00:00:00.001 Started by upstream project "autotest-nightly" build number 3339 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 2733 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.047 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.048 The recommended git tool is: git 00:00:00.048 using credential 00000000-0000-0000-0000-000000000002 00:00:00.053 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.069 Fetching changes from the remote Git repository 00:00:00.072 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.092 Using shallow fetch with depth 1 00:00:00.092 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.092 > git --version # timeout=10 00:00:00.120 > git --version # 'git version 2.39.2' 00:00:00.120 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.120 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.121 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.428 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.439 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.451 Checking out Revision 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 (FETCH_HEAD) 00:00:02.451 > git config core.sparsecheckout # timeout=10 00:00:02.461 > git read-tree -mu HEAD # timeout=10 00:00:02.476 > git checkout -f 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 # timeout=5 00:00:02.496 Commit message: "jenkins/check-jenkins-labels: add ExtraStorage label" 00:00:02.496 > git rev-list --no-walk 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 # timeout=10 00:00:02.719 [Pipeline] Start of Pipeline 00:00:02.733 [Pipeline] library 00:00:02.734 Loading library shm_lib@master 00:00:02.735 Library shm_lib@master is cached. Copying from home. 00:00:02.749 [Pipeline] node 00:00:02.758 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:02.759 [Pipeline] { 00:00:02.768 [Pipeline] catchError 00:00:02.769 [Pipeline] { 00:00:02.778 [Pipeline] wrap 00:00:02.784 [Pipeline] { 00:00:02.790 [Pipeline] stage 00:00:02.791 [Pipeline] { (Prologue) 00:00:02.805 [Pipeline] echo 00:00:02.806 Node: VM-host-SM0 00:00:02.811 [Pipeline] cleanWs 00:00:02.821 [WS-CLEANUP] Deleting project workspace... 00:00:02.821 [WS-CLEANUP] Deferred wipeout is used... 
00:00:02.826 [WS-CLEANUP] done 00:00:02.973 [Pipeline] setCustomBuildProperty 00:00:03.021 [Pipeline] nodesByLabel 00:00:03.022 Found a total of 2 nodes with the 'sorcerer' label 00:00:03.028 [Pipeline] httpRequest 00:00:03.032 HttpMethod: GET 00:00:03.033 URL: http://10.211.11.40/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:03.038 Sending request to url: http://10.211.11.40/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:03.041 Response Code: HTTP/1.1 200 OK 00:00:03.041 Success: Status code 200 is in the accepted range: 200,404 00:00:03.041 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:03.166 [Pipeline] sh 00:00:03.442 + tar --no-same-owner -xf jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:03.453 [Pipeline] httpRequest 00:00:03.456 HttpMethod: GET 00:00:03.456 URL: http://10.211.11.40/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:03.457 Sending request to url: http://10.211.11.40/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:03.458 Response Code: HTTP/1.1 200 OK 00:00:03.458 Success: Status code 200 is in the accepted range: 200,404 00:00:03.459 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:19.160 [Pipeline] sh 00:00:19.442 + tar --no-same-owner -xf spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:21.994 [Pipeline] sh 00:00:22.276 + git -C spdk log --oneline -n5 00:00:22.276 aa824ae66 bdevperf: remove max io size limit for verify 00:00:22.276 161ef3f54 scripts/perf: Rename vhost_*master_core to vhost_*main_core 00:00:22.276 8bba6ed63 fuzz/llvm_vfio_fuzz: Adjust array index to avoid overflow 00:00:22.276 387dbedc4 env_dpdk: fix build with OpenSSL < 3.0.0 00:00:22.276 2b5de63c1 include: ensure ENOKEY is defined on FreeBSD 00:00:22.293 [Pipeline] writeFile 00:00:22.308 [Pipeline] sh 00:00:22.588 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:22.599 [Pipeline] sh 00:00:22.879 + cat autorun-spdk.conf 00:00:22.879 RUN_NIGHTLY=1 00:00:22.879 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:22.879 SPDK_TEST_NVMF=1 00:00:22.879 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:22.879 SPDK_TEST_VFIOUSER=1 00:00:22.879 SPDK_TEST_USDT=1 00:00:22.879 SPDK_RUN_UBSAN=1 00:00:22.879 SPDK_TEST_NVMF_MDNS=1 00:00:22.879 NET_TYPE=virt 00:00:22.879 SPDK_JSONRPC_GO_CLIENT=1 00:00:22.886 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:22.887 [Pipeline] } 00:00:22.902 [Pipeline] // stage 00:00:22.914 [Pipeline] stage 00:00:22.916 [Pipeline] { (Run VM) 00:00:22.928 [Pipeline] sh 00:00:23.209 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:23.209 + echo 'Start stage prepare_nvme.sh' 00:00:23.209 Start stage prepare_nvme.sh 00:00:23.209 + [[ -n 4 ]] 00:00:23.209 + disk_prefix=ex4 00:00:23.209 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:00:23.209 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:00:23.209 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:00:23.209 ++ RUN_NIGHTLY=1 00:00:23.209 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:23.209 ++ SPDK_TEST_NVMF=1 00:00:23.209 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:23.209 ++ SPDK_TEST_VFIOUSER=1 00:00:23.209 ++ SPDK_TEST_USDT=1 00:00:23.209 ++ SPDK_RUN_UBSAN=1 00:00:23.209 ++ SPDK_TEST_NVMF_MDNS=1 00:00:23.209 ++ NET_TYPE=virt 00:00:23.209 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:23.209 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:23.209 + cd 
/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:23.209 + nvme_files=() 00:00:23.209 + declare -A nvme_files 00:00:23.209 + backend_dir=/var/lib/libvirt/images/backends 00:00:23.209 + nvme_files['nvme.img']=5G 00:00:23.209 + nvme_files['nvme-cmb.img']=5G 00:00:23.209 + nvme_files['nvme-multi0.img']=4G 00:00:23.209 + nvme_files['nvme-multi1.img']=4G 00:00:23.209 + nvme_files['nvme-multi2.img']=4G 00:00:23.209 + nvme_files['nvme-openstack.img']=8G 00:00:23.209 + nvme_files['nvme-zns.img']=5G 00:00:23.209 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:23.209 + (( SPDK_TEST_FTL == 1 )) 00:00:23.209 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:23.209 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:23.209 + for nvme in "${!nvme_files[@]}" 00:00:23.209 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:23.209 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:23.209 + for nvme in "${!nvme_files[@]}" 00:00:23.209 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:23.209 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:23.209 + for nvme in "${!nvme_files[@]}" 00:00:23.209 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:23.209 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:23.209 + for nvme in "${!nvme_files[@]}" 00:00:23.209 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:23.209 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:23.209 + for nvme in "${!nvme_files[@]}" 00:00:23.209 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:23.209 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:23.209 + for nvme in "${!nvme_files[@]}" 00:00:23.209 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:23.468 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:23.468 + for nvme in "${!nvme_files[@]}" 00:00:23.468 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:23.468 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:23.468 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:23.468 + echo 'End stage prepare_nvme.sh' 00:00:23.468 End stage prepare_nvme.sh 00:00:23.479 [Pipeline] sh 00:00:23.760 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:23.760 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:00:23.760 00:00:23.760 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:00:23.760 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:00:23.760 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:23.760 HELP=0 00:00:23.760 DRY_RUN=0 00:00:23.760 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:00:23.760 NVME_DISKS_TYPE=nvme,nvme, 00:00:23.760 NVME_AUTO_CREATE=0 00:00:23.760 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:00:23.760 NVME_CMB=,, 00:00:23.760 NVME_PMR=,, 00:00:23.760 NVME_ZNS=,, 00:00:23.760 NVME_MS=,, 00:00:23.760 NVME_FDP=,, 00:00:23.760 SPDK_VAGRANT_DISTRO=fedora38 00:00:23.760 SPDK_VAGRANT_VMCPU=10 00:00:23.760 SPDK_VAGRANT_VMRAM=12288 00:00:23.760 SPDK_VAGRANT_PROVIDER=libvirt 00:00:23.760 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:23.760 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:23.760 SPDK_OPENSTACK_NETWORK=0 00:00:23.761 VAGRANT_PACKAGE_BOX=0 00:00:23.761 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:23.761 FORCE_DISTRO=true 00:00:23.761 VAGRANT_BOX_VERSION= 00:00:23.761 EXTRA_VAGRANTFILES= 00:00:23.761 NIC_MODEL=e1000 00:00:23.761 00:00:23.761 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:00:23.761 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:27.050 Bringing machine 'default' up with 'libvirt' provider... 00:00:27.309 ==> default: Creating image (snapshot of base box volume). 00:00:27.569 ==> default: Creating domain with the following settings... 00:00:27.569 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1707937264_59296baabbaf46f14cf6 00:00:27.569 ==> default: -- Domain type: kvm 00:00:27.569 ==> default: -- Cpus: 10 00:00:27.569 ==> default: -- Feature: acpi 00:00:27.569 ==> default: -- Feature: apic 00:00:27.569 ==> default: -- Feature: pae 00:00:27.569 ==> default: -- Memory: 12288M 00:00:27.569 ==> default: -- Memory Backing: hugepages: 00:00:27.569 ==> default: -- Management MAC: 00:00:27.569 ==> default: -- Loader: 00:00:27.569 ==> default: -- Nvram: 00:00:27.569 ==> default: -- Base box: spdk/fedora38 00:00:27.569 ==> default: -- Storage pool: default 00:00:27.569 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1707937264_59296baabbaf46f14cf6.img (20G) 00:00:27.569 ==> default: -- Volume Cache: default 00:00:27.569 ==> default: -- Kernel: 00:00:27.569 ==> default: -- Initrd: 00:00:27.569 ==> default: -- Graphics Type: vnc 00:00:27.569 ==> default: -- Graphics Port: -1 00:00:27.569 ==> default: -- Graphics IP: 127.0.0.1 00:00:27.569 ==> default: -- Graphics Password: Not defined 00:00:27.569 ==> default: -- Video Type: cirrus 00:00:27.569 ==> default: -- Video VRAM: 9216 00:00:27.569 ==> default: -- Sound Type: 00:00:27.569 ==> default: -- Keymap: en-us 00:00:27.569 ==> default: -- TPM Path: 00:00:27.569 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:27.569 ==> default: -- Command line args: 00:00:27.569 ==> default: -> value=-device, 00:00:27.569 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:27.569 ==> default: -> value=-drive, 00:00:27.569 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:00:27.569 ==> default: -> value=-device, 00:00:27.569 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:27.569 ==> default: -> value=-device, 00:00:27.569 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:27.569 ==> default: -> value=-drive, 00:00:27.569 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:27.569 ==> default: -> value=-device, 00:00:27.569 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:27.569 ==> default: -> value=-drive, 00:00:27.569 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:27.569 ==> default: -> value=-device, 00:00:27.569 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:27.569 ==> default: -> value=-drive, 00:00:27.569 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:27.569 ==> default: -> value=-device, 00:00:27.569 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:27.569 ==> default: Creating shared folders metadata... 00:00:27.569 ==> default: Starting domain. 00:00:29.476 ==> default: Waiting for domain to get an IP address... 00:00:47.600 ==> default: Waiting for SSH to become available... 00:00:48.536 ==> default: Configuring and enabling network interfaces... 00:00:53.812 default: SSH address: 192.168.121.5:22 00:00:53.812 default: SSH username: vagrant 00:00:53.812 default: SSH auth method: private key 00:00:55.716 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:03.830 ==> default: Mounting SSHFS shared folder... 00:01:04.763 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:04.763 ==> default: Checking Mount.. 00:01:06.137 ==> default: Folder Successfully Mounted! 00:01:06.138 ==> default: Running provisioner: file... 00:01:06.705 default: ~/.gitconfig => .gitconfig 00:01:07.273 00:01:07.273 SUCCESS! 00:01:07.273 00:01:07.273 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:07.273 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:07.273 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:01:07.273 00:01:07.282 [Pipeline] } 00:01:07.298 [Pipeline] // stage 00:01:07.306 [Pipeline] dir 00:01:07.307 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:07.308 [Pipeline] { 00:01:07.322 [Pipeline] catchError 00:01:07.323 [Pipeline] { 00:01:07.336 [Pipeline] sh 00:01:07.616 + vagrant ssh-config --host vagrant 00:01:07.616 + sed -ne /^Host/,$p 00:01:07.616 + tee ssh_conf 00:01:10.905 Host vagrant 00:01:10.905 HostName 192.168.121.5 00:01:10.905 User vagrant 00:01:10.905 Port 22 00:01:10.905 UserKnownHostsFile /dev/null 00:01:10.905 StrictHostKeyChecking no 00:01:10.905 PasswordAuthentication no 00:01:10.905 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:01:10.905 IdentitiesOnly yes 00:01:10.905 LogLevel FATAL 00:01:10.905 ForwardAgent yes 00:01:10.905 ForwardX11 yes 00:01:10.905 00:01:10.918 [Pipeline] withEnv 00:01:10.919 [Pipeline] { 00:01:10.934 [Pipeline] sh 00:01:11.214 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:11.215 source /etc/os-release 00:01:11.215 [[ -e /image.version ]] && img=$(< /image.version) 00:01:11.215 # Minimal, systemd-like check. 00:01:11.215 if [[ -e /.dockerenv ]]; then 00:01:11.215 # Clear garbage from the node's name: 00:01:11.215 # agt-er_autotest_547-896 -> autotest_547-896 00:01:11.215 agent=${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:11.215 if mountpoint -q /etc/hostname; then 00:01:11.215 # We can assume this is a mount from a host where container is running, 00:01:11.215 # so fetch its hostname to easily identify the target swarm worker. 00:01:11.215 container="$(< /etc/hostname) ($agent)" 00:01:11.215 else 00:01:11.215 # Fallback 00:01:11.215 container=$agent 00:01:11.215 fi 00:01:11.215 fi 00:01:11.215 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:11.215 00:01:11.484 [Pipeline] } 00:01:11.497 [Pipeline] // withEnv 00:01:11.503 [Pipeline] setCustomBuildProperty 00:01:11.513 [Pipeline] stage 00:01:11.515 [Pipeline] { (Tests) 00:01:11.528 [Pipeline] sh 00:01:11.807 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:12.077 [Pipeline] timeout 00:01:12.078 Timeout set to expire in 40 min 00:01:12.079 [Pipeline] { 00:01:12.095 [Pipeline] sh 00:01:12.376 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:12.944 HEAD is now at aa824ae66 bdevperf: remove max io size limit for verify 00:01:12.955 [Pipeline] sh 00:01:13.233 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:13.504 [Pipeline] sh 00:01:13.781 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:14.053 [Pipeline] sh 00:01:14.329 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:01:14.329 ++ readlink -f spdk_repo 00:01:14.329 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:14.329 + [[ -n /home/vagrant/spdk_repo ]] 00:01:14.329 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:14.329 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:14.329 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:14.329 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:14.329 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:14.329 + cd /home/vagrant/spdk_repo 00:01:14.330 + source /etc/os-release 00:01:14.330 ++ NAME='Fedora Linux' 00:01:14.330 ++ VERSION='38 (Cloud Edition)' 00:01:14.330 ++ ID=fedora 00:01:14.330 ++ VERSION_ID=38 00:01:14.330 ++ VERSION_CODENAME= 00:01:14.330 ++ PLATFORM_ID=platform:f38 00:01:14.330 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:14.330 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:14.330 ++ LOGO=fedora-logo-icon 00:01:14.330 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:14.330 ++ HOME_URL=https://fedoraproject.org/ 00:01:14.330 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:14.330 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:14.330 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:14.330 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:14.330 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:14.330 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:14.330 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:14.330 ++ SUPPORT_END=2024-05-14 00:01:14.330 ++ VARIANT='Cloud Edition' 00:01:14.330 ++ VARIANT_ID=cloud 00:01:14.330 + uname -a 00:01:14.588 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:14.588 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:14.588 Hugepages 00:01:14.588 node hugesize free / total 00:01:14.588 node0 1048576kB 0 / 0 00:01:14.588 node0 2048kB 0 / 0 00:01:14.588 00:01:14.588 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:14.588 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:14.588 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:14.588 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:14.846 + rm -f /tmp/spdk-ld-path 00:01:14.846 + source autorun-spdk.conf 00:01:14.846 ++ RUN_NIGHTLY=1 00:01:14.846 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.846 ++ SPDK_TEST_NVMF=1 00:01:14.846 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.846 ++ SPDK_TEST_VFIOUSER=1 00:01:14.846 ++ SPDK_TEST_USDT=1 00:01:14.846 ++ SPDK_RUN_UBSAN=1 00:01:14.846 ++ SPDK_TEST_NVMF_MDNS=1 00:01:14.846 ++ NET_TYPE=virt 00:01:14.846 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:14.846 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.846 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:14.846 + [[ -n '' ]] 00:01:14.846 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:14.846 + for M in /var/spdk/build-*-manifest.txt 00:01:14.846 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:14.846 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:14.846 + for M in /var/spdk/build-*-manifest.txt 00:01:14.846 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:14.846 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:14.847 ++ uname 00:01:14.847 + [[ Linux == \L\i\n\u\x ]] 00:01:14.847 + sudo dmesg -T 00:01:14.847 + sudo dmesg --clear 00:01:14.847 + dmesg_pid=5135 00:01:14.847 + sudo dmesg -Tw 00:01:14.847 + [[ Fedora Linux == FreeBSD ]] 00:01:14.847 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.847 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.847 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:14.847 + [[ -x /usr/src/fio-static/fio ]] 00:01:14.847 + export FIO_BIN=/usr/src/fio-static/fio 00:01:14.847 + FIO_BIN=/usr/src/fio-static/fio 00:01:14.847 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:14.847 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:14.847 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:14.847 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.847 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.847 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:14.847 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.847 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.847 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:14.847 Test configuration: 00:01:14.847 RUN_NIGHTLY=1 00:01:14.847 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.847 SPDK_TEST_NVMF=1 00:01:14.847 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.847 SPDK_TEST_VFIOUSER=1 00:01:14.847 SPDK_TEST_USDT=1 00:01:14.847 SPDK_RUN_UBSAN=1 00:01:14.847 SPDK_TEST_NVMF_MDNS=1 00:01:14.847 NET_TYPE=virt 00:01:14.847 SPDK_JSONRPC_GO_CLIENT=1 00:01:14.847 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 19:01:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:14.847 19:01:52 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:14.847 19:01:52 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:14.847 19:01:52 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:14.847 19:01:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.847 19:01:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.847 19:01:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.847 19:01:52 -- paths/export.sh@5 -- $ export PATH 00:01:14.847 19:01:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.847 19:01:52 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:14.847 19:01:52 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:14.847 19:01:52 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707937312.XXXXXX 00:01:14.847 19:01:52 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707937312.AL7rp3 00:01:14.847 19:01:52 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:14.847 19:01:52 
-- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:14.847 19:01:52 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:14.847 19:01:52 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:14.847 19:01:52 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:14.847 19:01:52 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:14.847 19:01:52 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:14.847 19:01:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.847 19:01:52 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:01:15.105 19:01:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.105 19:01:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:15.105 19:01:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:15.105 19:01:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.105 Wed Feb 14 07:01:52 PM UTC 2024 00:01:15.105 19:01:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:15.105 v24.05-pre-81-gaa824ae66 00:01:15.105 19:01:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:15.105 19:01:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:15.105 19:01:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.105 19:01:52 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']' 00:01:15.105 19:01:52 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:01:15.105 19:01:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.105 ************************************ 00:01:15.105 START TEST ubsan 00:01:15.105 ************************************ 00:01:15.105 using ubsan 00:01:15.105 19:01:52 -- common/autotest_common.sh@1102 -- $ echo 'using ubsan' 00:01:15.105 00:01:15.105 real 0m0.000s 00:01:15.105 user 0m0.000s 00:01:15.105 sys 0m0.000s 00:01:15.105 19:01:52 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:15.105 19:01:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.105 ************************************ 00:01:15.105 END TEST ubsan 00:01:15.105 ************************************ 00:01:15.105 19:01:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.105 19:01:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.105 19:01:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.105 19:01:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.105 19:01:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.105 19:01:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.105 19:01:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.105 19:01:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:15.105 19:01:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:01:15.364 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:15.364 Using default DPDK in 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:01:15.623 Using 'verbs' RDMA provider 00:01:31.070 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:43.278 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:43.278 go version go1.21.1 linux/amd64 00:01:43.278 Creating mk/config.mk...done. 00:01:43.278 Creating mk/cc.flags.mk...done. 00:01:43.278 Type 'make' to build. 00:01:43.278 19:02:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:43.278 19:02:19 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']' 00:01:43.278 19:02:19 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:01:43.278 19:02:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.278 ************************************ 00:01:43.278 START TEST make 00:01:43.278 ************************************ 00:01:43.278 19:02:19 -- common/autotest_common.sh@1102 -- $ make -j10 00:01:43.279 make[1]: Nothing to be done for 'all'. 00:01:44.211 The Meson build system 00:01:44.211 Version: 1.3.1 00:01:44.211 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:01:44.211 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:44.211 Build type: native build 00:01:44.211 Project name: libvfio-user 00:01:44.211 Project version: 0.0.1 00:01:44.211 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:44.211 C linker for the host machine: cc ld.bfd 2.39-16 00:01:44.211 Host machine cpu family: x86_64 00:01:44.211 Host machine cpu: x86_64 00:01:44.211 Run-time dependency threads found: YES 00:01:44.211 Library dl found: YES 00:01:44.211 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:44.211 Run-time dependency json-c found: YES 0.17 00:01:44.211 Run-time dependency cmocka found: YES 1.1.7 00:01:44.211 Program pytest-3 found: NO 00:01:44.211 Program flake8 found: NO 00:01:44.211 Program misspell-fixer found: NO 00:01:44.211 Program restructuredtext-lint found: NO 00:01:44.211 Program valgrind found: YES (/usr/bin/valgrind) 00:01:44.211 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.211 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.211 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.211 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:44.211 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:01:44.211 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:01:44.211 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:44.211 Build targets in project: 8 00:01:44.211 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:44.211 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:44.211 00:01:44.211 libvfio-user 0.0.1 00:01:44.211 00:01:44.211 User defined options 00:01:44.211 buildtype : debug 00:01:44.212 default_library: shared 00:01:44.212 libdir : /usr/local/lib 00:01:44.212 00:01:44.212 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.469 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:44.727 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:44.727 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:44.727 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:44.727 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:44.727 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:44.727 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:44.986 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:44.986 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:44.986 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:44.986 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:44.986 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:44.986 [12/37] Compiling C object samples/client.p/client.c.o 00:01:44.986 [13/37] Compiling C object samples/null.p/null.c.o 00:01:44.986 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:44.986 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:44.986 [16/37] Linking target samples/client 00:01:44.986 [17/37] Compiling C object samples/server.p/server.c.o 00:01:44.986 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:44.986 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:45.244 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:45.244 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:45.244 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:45.244 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:45.244 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:45.244 [25/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:45.244 [26/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:45.244 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:01:45.244 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:45.244 [29/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:45.244 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:45.244 [31/37] Linking target test/unit_tests 00:01:45.244 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:45.502 [33/37] Linking target samples/null 00:01:45.502 [34/37] Linking target samples/server 00:01:45.502 [35/37] Linking target samples/gpio-pci-idio-16 00:01:45.502 [36/37] Linking target samples/lspci 00:01:45.503 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:45.503 INFO: autodetecting backend as ninja 00:01:45.503 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:45.503 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:46.069 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:46.069 ninja: no work to do. 00:01:54.183 The Meson build system 00:01:54.183 Version: 1.3.1 00:01:54.183 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:54.183 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:54.184 Build type: native build 00:01:54.184 Program cat found: YES (/usr/bin/cat) 00:01:54.184 Project name: DPDK 00:01:54.184 Project version: 23.11.0 00:01:54.184 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:54.184 C linker for the host machine: cc ld.bfd 2.39-16 00:01:54.184 Host machine cpu family: x86_64 00:01:54.184 Host machine cpu: x86_64 00:01:54.184 Message: ## Building in Developer Mode ## 00:01:54.184 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.184 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:54.184 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.184 Program python3 found: YES (/usr/bin/python3) 00:01:54.184 Program cat found: YES (/usr/bin/cat) 00:01:54.184 Compiler for C supports arguments -march=native: YES 00:01:54.184 Checking for size of "void *" : 8 00:01:54.184 Checking for size of "void *" : 8 (cached) 00:01:54.184 Library m found: YES 00:01:54.184 Library numa found: YES 00:01:54.184 Has header "numaif.h" : YES 00:01:54.184 Library fdt found: NO 00:01:54.184 Library execinfo found: NO 00:01:54.184 Has header "execinfo.h" : YES 00:01:54.184 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:54.184 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.184 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.184 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.184 Run-time dependency openssl found: YES 3.0.9 00:01:54.184 Run-time dependency libpcap found: YES 1.10.4 00:01:54.184 Has header "pcap.h" with dependency libpcap: YES 00:01:54.184 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.184 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.184 Compiler for C supports arguments -Wformat: YES 00:01:54.184 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.184 Compiler for C supports arguments -Wformat-security: NO 00:01:54.184 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.184 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.184 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.184 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.184 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.184 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.184 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.184 Compiler for C supports arguments -Wundef: YES 00:01:54.184 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.184 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.184 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.184 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.184 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.184 Program objdump found: YES (/usr/bin/objdump) 00:01:54.184 
Compiler for C supports arguments -mavx512f: YES 00:01:54.184 Checking if "AVX512 checking" compiles: YES 00:01:54.184 Fetching value of define "__SSE4_2__" : 1 00:01:54.184 Fetching value of define "__AES__" : 1 00:01:54.184 Fetching value of define "__AVX__" : 1 00:01:54.184 Fetching value of define "__AVX2__" : 1 00:01:54.184 Fetching value of define "__AVX512BW__" : (undefined) 00:01:54.184 Fetching value of define "__AVX512CD__" : (undefined) 00:01:54.184 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:54.184 Fetching value of define "__AVX512F__" : (undefined) 00:01:54.184 Fetching value of define "__AVX512VL__" : (undefined) 00:01:54.184 Fetching value of define "__PCLMUL__" : 1 00:01:54.184 Fetching value of define "__RDRND__" : 1 00:01:54.184 Fetching value of define "__RDSEED__" : 1 00:01:54.184 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:54.184 Fetching value of define "__znver1__" : (undefined) 00:01:54.184 Fetching value of define "__znver2__" : (undefined) 00:01:54.184 Fetching value of define "__znver3__" : (undefined) 00:01:54.184 Fetching value of define "__znver4__" : (undefined) 00:01:54.184 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.184 Message: lib/log: Defining dependency "log" 00:01:54.184 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.184 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.184 Checking for function "getentropy" : NO 00:01:54.184 Message: lib/eal: Defining dependency "eal" 00:01:54.184 Message: lib/ring: Defining dependency "ring" 00:01:54.184 Message: lib/rcu: Defining dependency "rcu" 00:01:54.184 Message: lib/mempool: Defining dependency "mempool" 00:01:54.184 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.184 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.184 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.184 Compiler for C supports arguments -mpclmul: YES 00:01:54.184 Compiler for C supports arguments -maes: YES 00:01:54.184 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.184 Compiler for C supports arguments -mavx512bw: YES 00:01:54.184 Compiler for C supports arguments -mavx512dq: YES 00:01:54.184 Compiler for C supports arguments -mavx512vl: YES 00:01:54.184 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:54.184 Compiler for C supports arguments -mavx2: YES 00:01:54.184 Compiler for C supports arguments -mavx: YES 00:01:54.184 Message: lib/net: Defining dependency "net" 00:01:54.184 Message: lib/meter: Defining dependency "meter" 00:01:54.184 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.184 Message: lib/pci: Defining dependency "pci" 00:01:54.184 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.184 Message: lib/hash: Defining dependency "hash" 00:01:54.184 Message: lib/timer: Defining dependency "timer" 00:01:54.184 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.184 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.184 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.184 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.184 Message: lib/power: Defining dependency "power" 00:01:54.184 Message: lib/reorder: Defining dependency "reorder" 00:01:54.184 Message: lib/security: Defining dependency "security" 00:01:54.184 Has header "linux/userfaultfd.h" : YES 00:01:54.184 Has header "linux/vduse.h" : YES 00:01:54.184 Message: lib/vhost: Defining dependency "vhost" 00:01:54.184 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:01:54.184 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.184 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.184 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.184 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.184 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.184 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.184 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.184 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.184 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:54.184 Program doxygen found: YES (/usr/bin/doxygen) 00:01:54.184 Configuring doxy-api-html.conf using configuration 00:01:54.184 Configuring doxy-api-man.conf using configuration 00:01:54.184 Program mandb found: YES (/usr/bin/mandb) 00:01:54.184 Program sphinx-build found: NO 00:01:54.184 Configuring rte_build_config.h using configuration 00:01:54.184 Message: 00:01:54.184 ================= 00:01:54.184 Applications Enabled 00:01:54.184 ================= 00:01:54.184 00:01:54.184 apps: 00:01:54.184 00:01:54.184 00:01:54.184 Message: 00:01:54.184 ================= 00:01:54.184 Libraries Enabled 00:01:54.184 ================= 00:01:54.184 00:01:54.184 libs: 00:01:54.184 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.184 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.184 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.184 00:01:54.184 Message: 00:01:54.184 =============== 00:01:54.184 Drivers Enabled 00:01:54.184 =============== 00:01:54.184 00:01:54.184 common: 00:01:54.184 00:01:54.184 bus: 00:01:54.184 pci, vdev, 00:01:54.184 mempool: 00:01:54.184 ring, 00:01:54.184 dma: 00:01:54.184 00:01:54.184 net: 00:01:54.184 00:01:54.184 crypto: 00:01:54.184 00:01:54.184 compress: 00:01:54.184 00:01:54.184 vdpa: 00:01:54.184 00:01:54.184 00:01:54.184 Message: 00:01:54.184 ================= 00:01:54.184 Content Skipped 00:01:54.184 ================= 00:01:54.184 00:01:54.184 apps: 00:01:54.184 dumpcap: explicitly disabled via build config 00:01:54.184 graph: explicitly disabled via build config 00:01:54.184 pdump: explicitly disabled via build config 00:01:54.184 proc-info: explicitly disabled via build config 00:01:54.184 test-acl: explicitly disabled via build config 00:01:54.184 test-bbdev: explicitly disabled via build config 00:01:54.184 test-cmdline: explicitly disabled via build config 00:01:54.184 test-compress-perf: explicitly disabled via build config 00:01:54.184 test-crypto-perf: explicitly disabled via build config 00:01:54.184 test-dma-perf: explicitly disabled via build config 00:01:54.184 test-eventdev: explicitly disabled via build config 00:01:54.184 test-fib: explicitly disabled via build config 00:01:54.184 test-flow-perf: explicitly disabled via build config 00:01:54.184 test-gpudev: explicitly disabled via build config 00:01:54.184 test-mldev: explicitly disabled via build config 00:01:54.184 test-pipeline: explicitly disabled via build config 00:01:54.184 test-pmd: explicitly disabled via build config 00:01:54.184 test-regex: explicitly disabled via build config 00:01:54.184 test-sad: explicitly disabled via build config 00:01:54.184 test-security-perf: explicitly disabled via build config 00:01:54.184 00:01:54.184 libs: 00:01:54.184 metrics: explicitly disabled 
via build config 00:01:54.184 acl: explicitly disabled via build config 00:01:54.184 bbdev: explicitly disabled via build config 00:01:54.184 bitratestats: explicitly disabled via build config 00:01:54.184 bpf: explicitly disabled via build config 00:01:54.184 cfgfile: explicitly disabled via build config 00:01:54.185 distributor: explicitly disabled via build config 00:01:54.185 efd: explicitly disabled via build config 00:01:54.185 eventdev: explicitly disabled via build config 00:01:54.185 dispatcher: explicitly disabled via build config 00:01:54.185 gpudev: explicitly disabled via build config 00:01:54.185 gro: explicitly disabled via build config 00:01:54.185 gso: explicitly disabled via build config 00:01:54.185 ip_frag: explicitly disabled via build config 00:01:54.185 jobstats: explicitly disabled via build config 00:01:54.185 latencystats: explicitly disabled via build config 00:01:54.185 lpm: explicitly disabled via build config 00:01:54.185 member: explicitly disabled via build config 00:01:54.185 pcapng: explicitly disabled via build config 00:01:54.185 rawdev: explicitly disabled via build config 00:01:54.185 regexdev: explicitly disabled via build config 00:01:54.185 mldev: explicitly disabled via build config 00:01:54.185 rib: explicitly disabled via build config 00:01:54.185 sched: explicitly disabled via build config 00:01:54.185 stack: explicitly disabled via build config 00:01:54.185 ipsec: explicitly disabled via build config 00:01:54.185 pdcp: explicitly disabled via build config 00:01:54.185 fib: explicitly disabled via build config 00:01:54.185 port: explicitly disabled via build config 00:01:54.185 pdump: explicitly disabled via build config 00:01:54.185 table: explicitly disabled via build config 00:01:54.185 pipeline: explicitly disabled via build config 00:01:54.185 graph: explicitly disabled via build config 00:01:54.185 node: explicitly disabled via build config 00:01:54.185 00:01:54.185 drivers: 00:01:54.185 common/cpt: not in enabled drivers build config 00:01:54.185 common/dpaax: not in enabled drivers build config 00:01:54.185 common/iavf: not in enabled drivers build config 00:01:54.185 common/idpf: not in enabled drivers build config 00:01:54.185 common/mvep: not in enabled drivers build config 00:01:54.185 common/octeontx: not in enabled drivers build config 00:01:54.185 bus/auxiliary: not in enabled drivers build config 00:01:54.185 bus/cdx: not in enabled drivers build config 00:01:54.185 bus/dpaa: not in enabled drivers build config 00:01:54.185 bus/fslmc: not in enabled drivers build config 00:01:54.185 bus/ifpga: not in enabled drivers build config 00:01:54.185 bus/platform: not in enabled drivers build config 00:01:54.185 bus/vmbus: not in enabled drivers build config 00:01:54.185 common/cnxk: not in enabled drivers build config 00:01:54.185 common/mlx5: not in enabled drivers build config 00:01:54.185 common/nfp: not in enabled drivers build config 00:01:54.185 common/qat: not in enabled drivers build config 00:01:54.185 common/sfc_efx: not in enabled drivers build config 00:01:54.185 mempool/bucket: not in enabled drivers build config 00:01:54.185 mempool/cnxk: not in enabled drivers build config 00:01:54.185 mempool/dpaa: not in enabled drivers build config 00:01:54.185 mempool/dpaa2: not in enabled drivers build config 00:01:54.185 mempool/octeontx: not in enabled drivers build config 00:01:54.185 mempool/stack: not in enabled drivers build config 00:01:54.185 dma/cnxk: not in enabled drivers build config 00:01:54.185 dma/dpaa: not in enabled 
drivers build config 00:01:54.185 dma/dpaa2: not in enabled drivers build config 00:01:54.185 dma/hisilicon: not in enabled drivers build config 00:01:54.185 dma/idxd: not in enabled drivers build config 00:01:54.185 dma/ioat: not in enabled drivers build config 00:01:54.185 dma/skeleton: not in enabled drivers build config 00:01:54.185 net/af_packet: not in enabled drivers build config 00:01:54.185 net/af_xdp: not in enabled drivers build config 00:01:54.185 net/ark: not in enabled drivers build config 00:01:54.185 net/atlantic: not in enabled drivers build config 00:01:54.185 net/avp: not in enabled drivers build config 00:01:54.185 net/axgbe: not in enabled drivers build config 00:01:54.185 net/bnx2x: not in enabled drivers build config 00:01:54.185 net/bnxt: not in enabled drivers build config 00:01:54.185 net/bonding: not in enabled drivers build config 00:01:54.185 net/cnxk: not in enabled drivers build config 00:01:54.185 net/cpfl: not in enabled drivers build config 00:01:54.185 net/cxgbe: not in enabled drivers build config 00:01:54.185 net/dpaa: not in enabled drivers build config 00:01:54.185 net/dpaa2: not in enabled drivers build config 00:01:54.185 net/e1000: not in enabled drivers build config 00:01:54.185 net/ena: not in enabled drivers build config 00:01:54.185 net/enetc: not in enabled drivers build config 00:01:54.185 net/enetfec: not in enabled drivers build config 00:01:54.185 net/enic: not in enabled drivers build config 00:01:54.185 net/failsafe: not in enabled drivers build config 00:01:54.185 net/fm10k: not in enabled drivers build config 00:01:54.185 net/gve: not in enabled drivers build config 00:01:54.185 net/hinic: not in enabled drivers build config 00:01:54.185 net/hns3: not in enabled drivers build config 00:01:54.185 net/i40e: not in enabled drivers build config 00:01:54.185 net/iavf: not in enabled drivers build config 00:01:54.185 net/ice: not in enabled drivers build config 00:01:54.185 net/idpf: not in enabled drivers build config 00:01:54.185 net/igc: not in enabled drivers build config 00:01:54.185 net/ionic: not in enabled drivers build config 00:01:54.185 net/ipn3ke: not in enabled drivers build config 00:01:54.185 net/ixgbe: not in enabled drivers build config 00:01:54.185 net/mana: not in enabled drivers build config 00:01:54.185 net/memif: not in enabled drivers build config 00:01:54.185 net/mlx4: not in enabled drivers build config 00:01:54.185 net/mlx5: not in enabled drivers build config 00:01:54.185 net/mvneta: not in enabled drivers build config 00:01:54.185 net/mvpp2: not in enabled drivers build config 00:01:54.185 net/netvsc: not in enabled drivers build config 00:01:54.185 net/nfb: not in enabled drivers build config 00:01:54.185 net/nfp: not in enabled drivers build config 00:01:54.185 net/ngbe: not in enabled drivers build config 00:01:54.185 net/null: not in enabled drivers build config 00:01:54.185 net/octeontx: not in enabled drivers build config 00:01:54.185 net/octeon_ep: not in enabled drivers build config 00:01:54.185 net/pcap: not in enabled drivers build config 00:01:54.185 net/pfe: not in enabled drivers build config 00:01:54.185 net/qede: not in enabled drivers build config 00:01:54.185 net/ring: not in enabled drivers build config 00:01:54.185 net/sfc: not in enabled drivers build config 00:01:54.185 net/softnic: not in enabled drivers build config 00:01:54.185 net/tap: not in enabled drivers build config 00:01:54.185 net/thunderx: not in enabled drivers build config 00:01:54.185 net/txgbe: not in enabled drivers build 
config 00:01:54.185 net/vdev_netvsc: not in enabled drivers build config 00:01:54.185 net/vhost: not in enabled drivers build config 00:01:54.185 net/virtio: not in enabled drivers build config 00:01:54.185 net/vmxnet3: not in enabled drivers build config 00:01:54.185 raw/*: missing internal dependency, "rawdev" 00:01:54.185 crypto/armv8: not in enabled drivers build config 00:01:54.185 crypto/bcmfs: not in enabled drivers build config 00:01:54.185 crypto/caam_jr: not in enabled drivers build config 00:01:54.185 crypto/ccp: not in enabled drivers build config 00:01:54.185 crypto/cnxk: not in enabled drivers build config 00:01:54.185 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.185 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.185 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.185 crypto/mlx5: not in enabled drivers build config 00:01:54.185 crypto/mvsam: not in enabled drivers build config 00:01:54.185 crypto/nitrox: not in enabled drivers build config 00:01:54.185 crypto/null: not in enabled drivers build config 00:01:54.185 crypto/octeontx: not in enabled drivers build config 00:01:54.185 crypto/openssl: not in enabled drivers build config 00:01:54.185 crypto/scheduler: not in enabled drivers build config 00:01:54.185 crypto/uadk: not in enabled drivers build config 00:01:54.185 crypto/virtio: not in enabled drivers build config 00:01:54.185 compress/isal: not in enabled drivers build config 00:01:54.185 compress/mlx5: not in enabled drivers build config 00:01:54.185 compress/octeontx: not in enabled drivers build config 00:01:54.185 compress/zlib: not in enabled drivers build config 00:01:54.185 regex/*: missing internal dependency, "regexdev" 00:01:54.185 ml/*: missing internal dependency, "mldev" 00:01:54.185 vdpa/ifc: not in enabled drivers build config 00:01:54.185 vdpa/mlx5: not in enabled drivers build config 00:01:54.185 vdpa/nfp: not in enabled drivers build config 00:01:54.185 vdpa/sfc: not in enabled drivers build config 00:01:54.185 event/*: missing internal dependency, "eventdev" 00:01:54.185 baseband/*: missing internal dependency, "bbdev" 00:01:54.185 gpu/*: missing internal dependency, "gpudev" 00:01:54.185 00:01:54.185 00:01:54.185 Build targets in project: 85 00:01:54.185 00:01:54.185 DPDK 23.11.0 00:01:54.185 00:01:54.185 User defined options 00:01:54.185 buildtype : debug 00:01:54.185 default_library : shared 00:01:54.185 libdir : lib 00:01:54.185 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:54.185 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:54.185 c_link_args : 00:01:54.185 cpu_instruction_set: native 00:01:54.185 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:54.185 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:54.185 enable_docs : false 00:01:54.185 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:54.185 enable_kmods : false 00:01:54.185 tests : false 00:01:54.185 00:01:54.185 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.752 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 
00:01:54.752 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:54.752 [2/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.752 [3/265] Linking static target lib/librte_log.a 00:01:54.752 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.752 [5/265] Linking static target lib/librte_kvargs.a 00:01:55.010 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.010 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.010 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:55.010 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:55.010 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:55.269 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.528 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.528 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.528 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.786 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.786 [16/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.786 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.786 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.786 [19/265] Linking target lib/librte_log.so.24.0 00:01:55.786 [20/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.786 [21/265] Linking static target lib/librte_telemetry.a 00:01:56.045 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:56.045 [23/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:56.045 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:56.045 [25/265] Linking target lib/librte_kvargs.so.24.0 00:01:56.045 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:56.304 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:56.304 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:56.562 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:56.562 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:56.562 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:56.562 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:56.820 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:56.820 [34/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.820 [35/265] Linking target lib/librte_telemetry.so.24.0 00:01:56.820 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:56.820 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.079 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:57.079 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:57.079 [40/265] Generating symbol file 
lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:57.079 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:57.079 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.079 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.079 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:57.337 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:57.595 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.595 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.595 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:57.595 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:57.595 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:57.854 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:57.854 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.854 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:57.854 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.112 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.112 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.112 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.112 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.371 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.371 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.629 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.629 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.629 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.629 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.629 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.888 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.888 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.888 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.888 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:59.146 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:59.146 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:59.146 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:59.146 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:59.405 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:59.405 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:59.405 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:59.405 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:59.663 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:59.663 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:59.950 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:59.950 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:59.950 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:59.950 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:59.950 [84/265] Linking static target lib/librte_ring.a 00:02:00.208 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.208 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.467 [87/265] Linking static target lib/librte_eal.a 00:02:00.467 [88/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:00.467 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:00.467 [90/265] Linking static target lib/librte_rcu.a 00:02:00.726 [91/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.726 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:00.726 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.726 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:00.985 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:00.985 [96/265] Linking static target lib/librte_mempool.a 00:02:01.244 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:01.244 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:01.244 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.244 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:01.244 [101/265] Linking static target lib/librte_mbuf.a 00:02:01.503 [102/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:01.503 [103/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:01.503 [104/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:01.762 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:01.762 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.762 [107/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:02.021 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:02.021 [109/265] Linking static target lib/librte_net.a 00:02:02.021 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:02.021 [111/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.021 [112/265] Linking static target lib/librte_meter.a 00:02:02.589 [113/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.589 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:02.589 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:02.589 [116/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.589 [117/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.589 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:02.847 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:03.415 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:03.674 [121/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:03.674 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:03.932 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:03.932 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:03.932 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:03.932 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:03.932 [127/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:03.932 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:03.932 [129/265] Linking static target lib/librte_pci.a 00:02:03.932 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:04.191 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:04.191 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:04.191 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:04.450 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:04.450 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:04.450 [136/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.450 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:04.450 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:04.450 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:04.450 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:04.450 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:04.708 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:04.708 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:04.708 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:04.966 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:04.966 [146/265] Linking static target lib/librte_ethdev.a 00:02:04.966 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:04.966 [148/265] Linking static target lib/librte_cmdline.a 00:02:05.226 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:05.226 [150/265] Linking static target lib/librte_timer.a 00:02:05.226 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:05.226 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:05.226 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:05.226 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:05.484 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:05.484 [156/265] Linking static target lib/librte_compressdev.a 00:02:05.484 [157/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.484 [158/265] Linking static target lib/librte_hash.a 00:02:05.743 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.743 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:05.743 [161/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:06.001 [162/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:06.001 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:06.001 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:06.276 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:06.276 [166/265] Linking static target lib/librte_dmadev.a 00:02:06.534 [167/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:06.534 [168/265] Linking static target lib/librte_cryptodev.a 00:02:06.534 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:06.534 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:06.534 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.534 [172/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:06.792 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.792 [174/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.792 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:07.051 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.051 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:07.051 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.051 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:07.051 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:07.309 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.567 [182/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:07.567 [183/265] Linking static target lib/librte_reorder.a 00:02:07.567 [184/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:07.567 [185/265] Linking static target lib/librte_power.a 00:02:07.825 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:07.825 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:07.825 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:07.825 [189/265] Linking static target lib/librte_security.a 00:02:08.083 [190/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.083 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:08.341 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:08.600 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.859 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:08.859 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:08.859 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:08.859 [197/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.117 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.117 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:09.376 [200/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:09.376 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:09.376 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:09.376 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:09.634 [204/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:09.634 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:09.634 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:09.634 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:09.892 [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:09.892 [209/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:09.892 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:09.892 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:09.892 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:09.892 [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:09.892 [214/265] Linking static target drivers/librte_bus_vdev.a 00:02:09.892 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:09.892 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:10.152 [217/265] Linking static target drivers/librte_bus_pci.a 00:02:10.152 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:10.152 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:10.152 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.152 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:10.412 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.412 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.412 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:10.412 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.980 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:10.980 [227/265] Linking static target lib/librte_vhost.a 00:02:12.001 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.001 [229/265] Linking target lib/librte_eal.so.24.0 00:02:12.279 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:12.279 [231/265] Linking target lib/librte_ring.so.24.0 00:02:12.279 [232/265] Linking target lib/librte_timer.so.24.0 00:02:12.279 [233/265] Linking target lib/librte_dmadev.so.24.0 00:02:12.279 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:12.279 [235/265] Linking target lib/librte_meter.so.24.0 00:02:12.279 [236/265] Linking target lib/librte_pci.so.24.0 00:02:12.537 [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:12.537 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:12.537 [239/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:12.537 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:12.537 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:12.537 [242/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.537 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:12.537 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:12.537 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:12.537 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:12.796 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:12.796 [248/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.796 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:12.796 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:12.796 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:13.055 [252/265] Linking target lib/librte_compressdev.so.24.0 00:02:13.055 [253/265] Linking target lib/librte_reorder.so.24.0 00:02:13.055 [254/265] Linking target lib/librte_net.so.24.0 00:02:13.055 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:13.055 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:13.055 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:13.055 [258/265] Linking target lib/librte_hash.so.24.0 00:02:13.055 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:13.055 [260/265] Linking target lib/librte_security.so.24.0 00:02:13.055 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:13.313 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:13.313 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:13.313 [264/265] Linking target lib/librte_power.so.24.0 00:02:13.313 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:13.313 INFO: autodetecting backend as ninja 00:02:13.313 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:14.689 CC lib/ut_mock/mock.o 00:02:14.689 CC lib/log/log.o 00:02:14.689 CC lib/log/log_flags.o 00:02:14.689 CC lib/log/log_deprecated.o 00:02:14.689 CC lib/ut/ut.o 00:02:14.689 LIB libspdk_ut_mock.a 00:02:14.689 LIB libspdk_log.a 00:02:14.689 LIB libspdk_ut.a 00:02:14.689 SO libspdk_ut_mock.so.6.0 00:02:14.689 SO libspdk_ut.so.2.0 00:02:14.689 SO libspdk_log.so.7.0 00:02:14.689 SYMLINK libspdk_ut_mock.so 00:02:14.689 SYMLINK libspdk_ut.so 00:02:14.948 SYMLINK libspdk_log.so 00:02:14.948 CC lib/ioat/ioat.o 00:02:14.948 CXX lib/trace_parser/trace.o 00:02:14.948 CC lib/dma/dma.o 00:02:14.948 CC lib/util/bit_array.o 00:02:14.948 CC lib/util/base64.o 00:02:14.948 CC lib/util/cpuset.o 00:02:14.948 CC lib/util/crc16.o 00:02:14.948 CC lib/util/crc32.o 00:02:14.948 CC lib/util/crc32c.o 00:02:14.948 CC lib/vfio_user/host/vfio_user_pci.o 00:02:15.206 CC lib/util/crc32_ieee.o 00:02:15.206 CC lib/util/crc64.o 00:02:15.206 CC lib/util/dif.o 00:02:15.206 LIB libspdk_dma.a 00:02:15.206 CC lib/util/fd.o 00:02:15.206 SO libspdk_dma.so.4.0 00:02:15.206 CC lib/util/file.o 00:02:15.206 CC lib/util/hexlify.o 00:02:15.206 LIB libspdk_ioat.a 00:02:15.206 CC lib/util/iov.o 00:02:15.206 CC lib/util/math.o 
00:02:15.207 SYMLINK libspdk_dma.so 00:02:15.465 CC lib/vfio_user/host/vfio_user.o 00:02:15.465 SO libspdk_ioat.so.7.0 00:02:15.465 CC lib/util/pipe.o 00:02:15.465 CC lib/util/strerror_tls.o 00:02:15.465 SYMLINK libspdk_ioat.so 00:02:15.465 CC lib/util/string.o 00:02:15.465 CC lib/util/uuid.o 00:02:15.465 CC lib/util/fd_group.o 00:02:15.465 CC lib/util/xor.o 00:02:15.465 CC lib/util/zipf.o 00:02:15.465 LIB libspdk_vfio_user.a 00:02:15.724 SO libspdk_vfio_user.so.5.0 00:02:15.724 SYMLINK libspdk_vfio_user.so 00:02:15.724 LIB libspdk_util.a 00:02:15.982 SO libspdk_util.so.9.0 00:02:15.982 SYMLINK libspdk_util.so 00:02:15.982 LIB libspdk_trace_parser.a 00:02:15.982 SO libspdk_trace_parser.so.5.0 00:02:16.240 CC lib/conf/conf.o 00:02:16.240 CC lib/idxd/idxd.o 00:02:16.240 CC lib/vmd/vmd.o 00:02:16.240 CC lib/idxd/idxd_user.o 00:02:16.240 CC lib/vmd/led.o 00:02:16.240 CC lib/env_dpdk/env.o 00:02:16.240 CC lib/rdma/common.o 00:02:16.240 CC lib/env_dpdk/memory.o 00:02:16.240 CC lib/json/json_parse.o 00:02:16.240 SYMLINK libspdk_trace_parser.so 00:02:16.240 CC lib/json/json_util.o 00:02:16.240 CC lib/json/json_write.o 00:02:16.498 LIB libspdk_conf.a 00:02:16.498 CC lib/rdma/rdma_verbs.o 00:02:16.498 CC lib/env_dpdk/pci.o 00:02:16.498 SO libspdk_conf.so.6.0 00:02:16.498 CC lib/env_dpdk/init.o 00:02:16.498 SYMLINK libspdk_conf.so 00:02:16.498 CC lib/env_dpdk/threads.o 00:02:16.498 CC lib/env_dpdk/pci_ioat.o 00:02:16.498 LIB libspdk_rdma.a 00:02:16.498 LIB libspdk_json.a 00:02:16.498 CC lib/env_dpdk/pci_virtio.o 00:02:16.757 SO libspdk_rdma.so.6.0 00:02:16.757 SO libspdk_json.so.6.0 00:02:16.757 CC lib/env_dpdk/pci_vmd.o 00:02:16.757 LIB libspdk_idxd.a 00:02:16.757 SYMLINK libspdk_rdma.so 00:02:16.757 CC lib/env_dpdk/pci_idxd.o 00:02:16.757 SYMLINK libspdk_json.so 00:02:16.757 SO libspdk_idxd.so.12.0 00:02:16.757 CC lib/env_dpdk/pci_event.o 00:02:16.757 CC lib/env_dpdk/sigbus_handler.o 00:02:16.757 SYMLINK libspdk_idxd.so 00:02:16.757 CC lib/env_dpdk/pci_dpdk.o 00:02:16.757 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:16.757 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:16.757 LIB libspdk_vmd.a 00:02:17.016 SO libspdk_vmd.so.6.0 00:02:17.016 CC lib/jsonrpc/jsonrpc_server.o 00:02:17.016 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:17.016 CC lib/jsonrpc/jsonrpc_client.o 00:02:17.016 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:17.016 SYMLINK libspdk_vmd.so 00:02:17.275 LIB libspdk_jsonrpc.a 00:02:17.275 SO libspdk_jsonrpc.so.6.0 00:02:17.275 SYMLINK libspdk_jsonrpc.so 00:02:17.534 CC lib/rpc/rpc.o 00:02:17.534 LIB libspdk_env_dpdk.a 00:02:17.793 SO libspdk_env_dpdk.so.14.0 00:02:17.793 LIB libspdk_rpc.a 00:02:17.793 SO libspdk_rpc.so.6.0 00:02:17.793 SYMLINK libspdk_env_dpdk.so 00:02:17.793 SYMLINK libspdk_rpc.so 00:02:18.052 CC lib/trace/trace.o 00:02:18.052 CC lib/trace/trace_flags.o 00:02:18.052 CC lib/trace/trace_rpc.o 00:02:18.052 CC lib/notify/notify.o 00:02:18.052 CC lib/notify/notify_rpc.o 00:02:18.052 CC lib/sock/sock.o 00:02:18.052 CC lib/sock/sock_rpc.o 00:02:18.311 LIB libspdk_notify.a 00:02:18.311 LIB libspdk_trace.a 00:02:18.311 SO libspdk_notify.so.6.0 00:02:18.311 SO libspdk_trace.so.10.0 00:02:18.311 SYMLINK libspdk_notify.so 00:02:18.311 SYMLINK libspdk_trace.so 00:02:18.570 LIB libspdk_sock.a 00:02:18.570 SO libspdk_sock.so.9.0 00:02:18.570 CC lib/thread/thread.o 00:02:18.570 CC lib/thread/iobuf.o 00:02:18.570 SYMLINK libspdk_sock.so 00:02:18.829 CC lib/nvme/nvme_ctrlr.o 00:02:18.829 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:18.829 CC lib/nvme/nvme_fabric.o 00:02:18.829 CC lib/nvme/nvme_ns_cmd.o 00:02:18.829 CC 
lib/nvme/nvme_ns.o 00:02:18.829 CC lib/nvme/nvme_pcie.o 00:02:18.829 CC lib/nvme/nvme_pcie_common.o 00:02:18.829 CC lib/nvme/nvme_qpair.o 00:02:18.829 CC lib/nvme/nvme.o 00:02:19.397 CC lib/nvme/nvme_quirks.o 00:02:19.656 CC lib/nvme/nvme_transport.o 00:02:19.656 CC lib/nvme/nvme_discovery.o 00:02:19.656 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:19.656 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:19.656 CC lib/nvme/nvme_tcp.o 00:02:19.656 CC lib/nvme/nvme_opal.o 00:02:19.656 CC lib/nvme/nvme_io_msg.o 00:02:19.914 LIB libspdk_thread.a 00:02:19.914 SO libspdk_thread.so.10.0 00:02:19.914 CC lib/nvme/nvme_poll_group.o 00:02:19.914 SYMLINK libspdk_thread.so 00:02:20.173 CC lib/nvme/nvme_zns.o 00:02:20.173 CC lib/nvme/nvme_cuse.o 00:02:20.173 CC lib/nvme/nvme_vfio_user.o 00:02:20.173 CC lib/nvme/nvme_rdma.o 00:02:20.432 CC lib/accel/accel.o 00:02:20.432 CC lib/blob/blobstore.o 00:02:20.432 CC lib/init/json_config.o 00:02:20.691 CC lib/blob/request.o 00:02:20.691 CC lib/accel/accel_rpc.o 00:02:20.691 CC lib/blob/zeroes.o 00:02:20.691 CC lib/init/subsystem.o 00:02:20.949 CC lib/accel/accel_sw.o 00:02:20.949 CC lib/blob/blob_bs_dev.o 00:02:20.949 CC lib/init/subsystem_rpc.o 00:02:20.949 CC lib/init/rpc.o 00:02:20.949 CC lib/virtio/virtio.o 00:02:20.949 CC lib/vfu_tgt/tgt_endpoint.o 00:02:21.208 CC lib/vfu_tgt/tgt_rpc.o 00:02:21.208 CC lib/virtio/virtio_vhost_user.o 00:02:21.208 CC lib/virtio/virtio_vfio_user.o 00:02:21.208 LIB libspdk_init.a 00:02:21.208 CC lib/virtio/virtio_pci.o 00:02:21.208 SO libspdk_init.so.5.0 00:02:21.208 SYMLINK libspdk_init.so 00:02:21.467 LIB libspdk_vfu_tgt.a 00:02:21.467 LIB libspdk_accel.a 00:02:21.467 CC lib/event/app.o 00:02:21.467 CC lib/event/reactor.o 00:02:21.467 CC lib/event/log_rpc.o 00:02:21.467 CC lib/event/app_rpc.o 00:02:21.467 SO libspdk_vfu_tgt.so.3.0 00:02:21.467 SO libspdk_accel.so.15.0 00:02:21.467 CC lib/event/scheduler_static.o 00:02:21.467 SYMLINK libspdk_vfu_tgt.so 00:02:21.467 LIB libspdk_virtio.a 00:02:21.467 SYMLINK libspdk_accel.so 00:02:21.467 SO libspdk_virtio.so.7.0 00:02:21.467 LIB libspdk_nvme.a 00:02:21.726 SYMLINK libspdk_virtio.so 00:02:21.726 CC lib/bdev/bdev.o 00:02:21.726 CC lib/bdev/bdev_rpc.o 00:02:21.726 CC lib/bdev/part.o 00:02:21.726 CC lib/bdev/bdev_zone.o 00:02:21.726 CC lib/bdev/scsi_nvme.o 00:02:21.726 SO libspdk_nvme.so.13.0 00:02:21.726 LIB libspdk_event.a 00:02:21.984 SO libspdk_event.so.13.0 00:02:21.984 SYMLINK libspdk_event.so 00:02:21.984 SYMLINK libspdk_nvme.so 00:02:23.361 LIB libspdk_blob.a 00:02:23.361 SO libspdk_blob.so.11.0 00:02:23.361 SYMLINK libspdk_blob.so 00:02:23.361 CC lib/lvol/lvol.o 00:02:23.361 CC lib/blobfs/blobfs.o 00:02:23.361 CC lib/blobfs/tree.o 00:02:23.928 LIB libspdk_bdev.a 00:02:24.187 SO libspdk_bdev.so.15.0 00:02:24.187 SYMLINK libspdk_bdev.so 00:02:24.187 LIB libspdk_lvol.a 00:02:24.187 LIB libspdk_blobfs.a 00:02:24.445 SO libspdk_lvol.so.10.0 00:02:24.445 SO libspdk_blobfs.so.10.0 00:02:24.445 CC lib/scsi/dev.o 00:02:24.445 CC lib/ublk/ublk_rpc.o 00:02:24.445 CC lib/ublk/ublk.o 00:02:24.445 CC lib/nvmf/ctrlr.o 00:02:24.445 CC lib/nvmf/ctrlr_discovery.o 00:02:24.445 CC lib/nvmf/ctrlr_bdev.o 00:02:24.445 CC lib/nbd/nbd.o 00:02:24.445 CC lib/ftl/ftl_core.o 00:02:24.445 SYMLINK libspdk_lvol.so 00:02:24.445 CC lib/nvmf/subsystem.o 00:02:24.445 SYMLINK libspdk_blobfs.so 00:02:24.445 CC lib/nvmf/nvmf.o 00:02:24.704 CC lib/nvmf/nvmf_rpc.o 00:02:24.704 CC lib/scsi/lun.o 00:02:24.704 CC lib/nbd/nbd_rpc.o 00:02:24.704 CC lib/ftl/ftl_init.o 00:02:24.962 CC lib/nvmf/transport.o 00:02:24.962 CC lib/scsi/port.o 
00:02:24.962 LIB libspdk_nbd.a 00:02:24.962 LIB libspdk_ublk.a 00:02:24.962 SO libspdk_nbd.so.7.0 00:02:24.962 CC lib/ftl/ftl_layout.o 00:02:24.962 SO libspdk_ublk.so.3.0 00:02:24.962 CC lib/scsi/scsi.o 00:02:24.962 CC lib/nvmf/tcp.o 00:02:24.962 SYMLINK libspdk_nbd.so 00:02:25.220 SYMLINK libspdk_ublk.so 00:02:25.220 CC lib/scsi/scsi_bdev.o 00:02:25.220 CC lib/scsi/scsi_pr.o 00:02:25.220 CC lib/scsi/scsi_rpc.o 00:02:25.220 CC lib/scsi/task.o 00:02:25.220 CC lib/ftl/ftl_debug.o 00:02:25.479 CC lib/ftl/ftl_io.o 00:02:25.479 CC lib/ftl/ftl_sb.o 00:02:25.479 CC lib/nvmf/vfio_user.o 00:02:25.479 CC lib/nvmf/rdma.o 00:02:25.479 CC lib/ftl/ftl_l2p.o 00:02:25.479 CC lib/ftl/ftl_l2p_flat.o 00:02:25.479 CC lib/ftl/ftl_nv_cache.o 00:02:25.479 LIB libspdk_scsi.a 00:02:25.737 CC lib/ftl/ftl_band.o 00:02:25.737 CC lib/ftl/ftl_band_ops.o 00:02:25.737 SO libspdk_scsi.so.9.0 00:02:25.737 SYMLINK libspdk_scsi.so 00:02:25.737 CC lib/ftl/ftl_writer.o 00:02:25.737 CC lib/ftl/ftl_rq.o 00:02:25.996 CC lib/iscsi/conn.o 00:02:25.996 CC lib/ftl/ftl_reloc.o 00:02:25.996 CC lib/iscsi/init_grp.o 00:02:25.996 CC lib/iscsi/iscsi.o 00:02:25.996 CC lib/ftl/ftl_l2p_cache.o 00:02:25.996 CC lib/iscsi/md5.o 00:02:26.254 CC lib/iscsi/param.o 00:02:26.254 CC lib/ftl/ftl_p2l.o 00:02:26.512 CC lib/vhost/vhost.o 00:02:26.512 CC lib/iscsi/portal_grp.o 00:02:26.512 CC lib/vhost/vhost_rpc.o 00:02:26.512 CC lib/ftl/ftl_trace.o 00:02:26.512 CC lib/iscsi/tgt_node.o 00:02:26.512 CC lib/ftl/mngt/ftl_mngt.o 00:02:26.771 CC lib/vhost/vhost_scsi.o 00:02:26.771 CC lib/iscsi/iscsi_subsystem.o 00:02:26.771 CC lib/iscsi/iscsi_rpc.o 00:02:27.029 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:27.029 CC lib/iscsi/task.o 00:02:27.029 CC lib/vhost/vhost_blk.o 00:02:27.029 CC lib/vhost/rte_vhost_user.o 00:02:27.029 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:27.288 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:27.288 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:27.288 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:27.288 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:27.288 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:27.288 LIB libspdk_iscsi.a 00:02:27.288 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:27.547 SO libspdk_iscsi.so.8.0 00:02:27.547 LIB libspdk_nvmf.a 00:02:27.547 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:27.547 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:27.547 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:27.547 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:27.547 SO libspdk_nvmf.so.18.0 00:02:27.547 SYMLINK libspdk_iscsi.so 00:02:27.547 CC lib/ftl/utils/ftl_conf.o 00:02:27.547 CC lib/ftl/utils/ftl_md.o 00:02:27.547 CC lib/ftl/utils/ftl_mempool.o 00:02:27.806 CC lib/ftl/utils/ftl_bitmap.o 00:02:27.806 CC lib/ftl/utils/ftl_property.o 00:02:27.806 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:27.806 SYMLINK libspdk_nvmf.so 00:02:27.806 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:27.806 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:27.806 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:27.806 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:27.806 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:28.065 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:28.065 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:28.065 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:28.065 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:28.065 CC lib/ftl/base/ftl_base_dev.o 00:02:28.065 CC lib/ftl/base/ftl_base_bdev.o 00:02:28.065 LIB libspdk_vhost.a 00:02:28.323 SO libspdk_vhost.so.8.0 00:02:28.323 LIB libspdk_ftl.a 00:02:28.323 SYMLINK libspdk_vhost.so 00:02:28.582 SO libspdk_ftl.so.9.0 00:02:28.841 SYMLINK libspdk_ftl.so 00:02:29.100 CC module/env_dpdk/env_dpdk_rpc.o 00:02:29.100 CC 
module/vfu_device/vfu_virtio.o 00:02:29.100 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:29.359 CC module/scheduler/gscheduler/gscheduler.o 00:02:29.359 CC module/sock/posix/posix.o 00:02:29.359 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:29.359 CC module/accel/error/accel_error.o 00:02:29.359 CC module/accel/dsa/accel_dsa.o 00:02:29.359 CC module/blob/bdev/blob_bdev.o 00:02:29.359 CC module/accel/ioat/accel_ioat.o 00:02:29.359 LIB libspdk_env_dpdk_rpc.a 00:02:29.359 SO libspdk_env_dpdk_rpc.so.6.0 00:02:29.359 LIB libspdk_scheduler_gscheduler.a 00:02:29.359 SYMLINK libspdk_env_dpdk_rpc.so 00:02:29.359 CC module/accel/ioat/accel_ioat_rpc.o 00:02:29.359 SO libspdk_scheduler_gscheduler.so.4.0 00:02:29.359 LIB libspdk_scheduler_dpdk_governor.a 00:02:29.359 LIB libspdk_scheduler_dynamic.a 00:02:29.359 CC module/accel/error/accel_error_rpc.o 00:02:29.359 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:29.359 SO libspdk_scheduler_dynamic.so.4.0 00:02:29.618 SYMLINK libspdk_scheduler_gscheduler.so 00:02:29.618 CC module/accel/dsa/accel_dsa_rpc.o 00:02:29.618 CC module/vfu_device/vfu_virtio_blk.o 00:02:29.618 CC module/vfu_device/vfu_virtio_scsi.o 00:02:29.618 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:29.618 SYMLINK libspdk_scheduler_dynamic.so 00:02:29.618 CC module/vfu_device/vfu_virtio_rpc.o 00:02:29.618 LIB libspdk_blob_bdev.a 00:02:29.618 LIB libspdk_accel_ioat.a 00:02:29.618 SO libspdk_blob_bdev.so.11.0 00:02:29.618 SO libspdk_accel_ioat.so.6.0 00:02:29.618 LIB libspdk_accel_error.a 00:02:29.618 SO libspdk_accel_error.so.2.0 00:02:29.618 SYMLINK libspdk_blob_bdev.so 00:02:29.618 CC module/accel/iaa/accel_iaa.o 00:02:29.618 SYMLINK libspdk_accel_ioat.so 00:02:29.618 LIB libspdk_accel_dsa.a 00:02:29.618 SO libspdk_accel_dsa.so.5.0 00:02:29.618 SYMLINK libspdk_accel_error.so 00:02:29.618 CC module/accel/iaa/accel_iaa_rpc.o 00:02:29.877 SYMLINK libspdk_accel_dsa.so 00:02:29.877 CC module/bdev/delay/vbdev_delay.o 00:02:29.877 CC module/blobfs/bdev/blobfs_bdev.o 00:02:29.877 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:29.877 LIB libspdk_vfu_device.a 00:02:29.877 CC module/bdev/error/vbdev_error.o 00:02:29.877 LIB libspdk_accel_iaa.a 00:02:29.877 CC module/bdev/lvol/vbdev_lvol.o 00:02:29.877 CC module/bdev/gpt/gpt.o 00:02:29.877 SO libspdk_accel_iaa.so.3.0 00:02:29.877 SO libspdk_vfu_device.so.3.0 00:02:29.877 CC module/bdev/malloc/bdev_malloc.o 00:02:29.877 LIB libspdk_sock_posix.a 00:02:29.877 SO libspdk_sock_posix.so.6.0 00:02:29.877 SYMLINK libspdk_accel_iaa.so 00:02:29.877 CC module/bdev/gpt/vbdev_gpt.o 00:02:30.136 SYMLINK libspdk_vfu_device.so 00:02:30.136 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:30.136 LIB libspdk_blobfs_bdev.a 00:02:30.136 SYMLINK libspdk_sock_posix.so 00:02:30.136 SO libspdk_blobfs_bdev.so.6.0 00:02:30.136 CC module/bdev/null/bdev_null.o 00:02:30.136 SYMLINK libspdk_blobfs_bdev.so 00:02:30.136 CC module/bdev/error/vbdev_error_rpc.o 00:02:30.136 CC module/bdev/nvme/bdev_nvme.o 00:02:30.136 CC module/bdev/passthru/vbdev_passthru.o 00:02:30.136 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:30.136 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:30.136 LIB libspdk_bdev_gpt.a 00:02:30.136 CC module/bdev/raid/bdev_raid.o 00:02:30.396 SO libspdk_bdev_gpt.so.6.0 00:02:30.396 LIB libspdk_bdev_malloc.a 00:02:30.396 SO libspdk_bdev_malloc.so.6.0 00:02:30.396 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:30.396 SYMLINK libspdk_bdev_gpt.so 00:02:30.396 LIB libspdk_bdev_error.a 00:02:30.396 CC module/bdev/raid/bdev_raid_rpc.o 00:02:30.396 SYMLINK 
libspdk_bdev_malloc.so 00:02:30.396 SO libspdk_bdev_error.so.6.0 00:02:30.396 CC module/bdev/raid/bdev_raid_sb.o 00:02:30.396 LIB libspdk_bdev_delay.a 00:02:30.396 CC module/bdev/null/bdev_null_rpc.o 00:02:30.396 SO libspdk_bdev_delay.so.6.0 00:02:30.396 SYMLINK libspdk_bdev_error.so 00:02:30.396 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:30.396 CC module/bdev/split/vbdev_split.o 00:02:30.396 LIB libspdk_bdev_passthru.a 00:02:30.396 SYMLINK libspdk_bdev_delay.so 00:02:30.396 CC module/bdev/nvme/nvme_rpc.o 00:02:30.655 SO libspdk_bdev_passthru.so.6.0 00:02:30.655 LIB libspdk_bdev_null.a 00:02:30.655 SYMLINK libspdk_bdev_passthru.so 00:02:30.655 CC module/bdev/nvme/bdev_mdns_client.o 00:02:30.655 CC module/bdev/nvme/vbdev_opal.o 00:02:30.655 SO libspdk_bdev_null.so.6.0 00:02:30.655 LIB libspdk_bdev_lvol.a 00:02:30.655 CC module/bdev/raid/raid0.o 00:02:30.655 SO libspdk_bdev_lvol.so.6.0 00:02:30.655 SYMLINK libspdk_bdev_null.so 00:02:30.655 CC module/bdev/raid/raid1.o 00:02:30.655 CC module/bdev/split/vbdev_split_rpc.o 00:02:30.655 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:30.655 SYMLINK libspdk_bdev_lvol.so 00:02:30.914 CC module/bdev/raid/concat.o 00:02:30.914 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:30.914 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:30.914 LIB libspdk_bdev_split.a 00:02:30.914 SO libspdk_bdev_split.so.6.0 00:02:30.914 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:30.914 CC module/bdev/aio/bdev_aio.o 00:02:30.914 SYMLINK libspdk_bdev_split.so 00:02:30.914 CC module/bdev/aio/bdev_aio_rpc.o 00:02:31.174 CC module/bdev/ftl/bdev_ftl.o 00:02:31.174 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:31.174 LIB libspdk_bdev_raid.a 00:02:31.174 SO libspdk_bdev_raid.so.6.0 00:02:31.174 LIB libspdk_bdev_zone_block.a 00:02:31.174 CC module/bdev/iscsi/bdev_iscsi.o 00:02:31.174 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:31.174 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:31.174 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:31.174 SO libspdk_bdev_zone_block.so.6.0 00:02:31.434 SYMLINK libspdk_bdev_raid.so 00:02:31.434 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:31.434 SYMLINK libspdk_bdev_zone_block.so 00:02:31.434 LIB libspdk_bdev_aio.a 00:02:31.434 LIB libspdk_bdev_ftl.a 00:02:31.434 SO libspdk_bdev_aio.so.6.0 00:02:31.434 SO libspdk_bdev_ftl.so.6.0 00:02:31.434 SYMLINK libspdk_bdev_aio.so 00:02:31.434 SYMLINK libspdk_bdev_ftl.so 00:02:31.713 LIB libspdk_bdev_iscsi.a 00:02:31.713 SO libspdk_bdev_iscsi.so.6.0 00:02:31.713 SYMLINK libspdk_bdev_iscsi.so 00:02:31.713 LIB libspdk_bdev_virtio.a 00:02:31.988 SO libspdk_bdev_virtio.so.6.0 00:02:31.988 SYMLINK libspdk_bdev_virtio.so 00:02:32.565 LIB libspdk_bdev_nvme.a 00:02:32.565 SO libspdk_bdev_nvme.so.7.0 00:02:32.565 SYMLINK libspdk_bdev_nvme.so 00:02:33.133 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:33.133 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:33.133 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:33.133 CC module/event/subsystems/iobuf/iobuf.o 00:02:33.133 CC module/event/subsystems/sock/sock.o 00:02:33.133 CC module/event/subsystems/vmd/vmd.o 00:02:33.133 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:33.133 CC module/event/subsystems/scheduler/scheduler.o 00:02:33.133 LIB libspdk_event_sock.a 00:02:33.133 LIB libspdk_event_vhost_blk.a 00:02:33.133 LIB libspdk_event_vfu_tgt.a 00:02:33.133 LIB libspdk_event_iobuf.a 00:02:33.133 SO libspdk_event_sock.so.5.0 00:02:33.133 LIB libspdk_event_scheduler.a 00:02:33.133 SO libspdk_event_vhost_blk.so.3.0 00:02:33.133 SO libspdk_event_vfu_tgt.so.3.0 
00:02:33.133 LIB libspdk_event_vmd.a 00:02:33.133 SO libspdk_event_scheduler.so.4.0 00:02:33.133 SO libspdk_event_iobuf.so.3.0 00:02:33.133 SYMLINK libspdk_event_sock.so 00:02:33.133 SO libspdk_event_vmd.so.6.0 00:02:33.133 SYMLINK libspdk_event_vfu_tgt.so 00:02:33.392 SYMLINK libspdk_event_vhost_blk.so 00:02:33.392 SYMLINK libspdk_event_scheduler.so 00:02:33.392 SYMLINK libspdk_event_iobuf.so 00:02:33.392 SYMLINK libspdk_event_vmd.so 00:02:33.392 CC module/event/subsystems/accel/accel.o 00:02:33.652 LIB libspdk_event_accel.a 00:02:33.652 SO libspdk_event_accel.so.6.0 00:02:33.652 SYMLINK libspdk_event_accel.so 00:02:33.910 CC module/event/subsystems/bdev/bdev.o 00:02:34.169 LIB libspdk_event_bdev.a 00:02:34.169 SO libspdk_event_bdev.so.6.0 00:02:34.169 SYMLINK libspdk_event_bdev.so 00:02:34.428 CC module/event/subsystems/scsi/scsi.o 00:02:34.428 CC module/event/subsystems/ublk/ublk.o 00:02:34.428 CC module/event/subsystems/nbd/nbd.o 00:02:34.428 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:34.428 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:34.428 LIB libspdk_event_scsi.a 00:02:34.428 LIB libspdk_event_nbd.a 00:02:34.428 LIB libspdk_event_ublk.a 00:02:34.687 SO libspdk_event_scsi.so.6.0 00:02:34.687 SO libspdk_event_nbd.so.6.0 00:02:34.687 SO libspdk_event_ublk.so.3.0 00:02:34.687 SYMLINK libspdk_event_scsi.so 00:02:34.687 SYMLINK libspdk_event_nbd.so 00:02:34.687 SYMLINK libspdk_event_ublk.so 00:02:34.687 LIB libspdk_event_nvmf.a 00:02:34.687 SO libspdk_event_nvmf.so.6.0 00:02:34.687 SYMLINK libspdk_event_nvmf.so 00:02:34.687 CC module/event/subsystems/iscsi/iscsi.o 00:02:34.687 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:34.946 LIB libspdk_event_vhost_scsi.a 00:02:34.946 LIB libspdk_event_iscsi.a 00:02:34.946 SO libspdk_event_vhost_scsi.so.3.0 00:02:34.946 SO libspdk_event_iscsi.so.6.0 00:02:34.946 SYMLINK libspdk_event_vhost_scsi.so 00:02:35.205 SYMLINK libspdk_event_iscsi.so 00:02:35.205 SO libspdk.so.6.0 00:02:35.205 SYMLINK libspdk.so 00:02:35.464 CC app/trace_record/trace_record.o 00:02:35.464 CXX app/trace/trace.o 00:02:35.464 CC app/nvmf_tgt/nvmf_main.o 00:02:35.464 CC examples/ioat/perf/perf.o 00:02:35.464 CC examples/accel/perf/accel_perf.o 00:02:35.464 CC examples/nvme/hello_world/hello_world.o 00:02:35.464 CC examples/sock/hello_world/hello_sock.o 00:02:35.464 CC examples/bdev/hello_world/hello_bdev.o 00:02:35.464 CC test/accel/dif/dif.o 00:02:35.464 CC examples/blob/hello_world/hello_blob.o 00:02:35.722 LINK nvmf_tgt 00:02:35.722 LINK spdk_trace_record 00:02:35.722 LINK ioat_perf 00:02:35.722 LINK hello_world 00:02:35.722 LINK hello_bdev 00:02:35.722 LINK hello_sock 00:02:35.722 LINK spdk_trace 00:02:35.722 LINK hello_blob 00:02:35.980 LINK dif 00:02:35.980 LINK accel_perf 00:02:35.980 CC examples/nvme/reconnect/reconnect.o 00:02:35.981 CC examples/ioat/verify/verify.o 00:02:35.981 CC examples/blob/cli/blobcli.o 00:02:35.981 CC examples/vmd/lsvmd/lsvmd.o 00:02:35.981 CC examples/bdev/bdevperf/bdevperf.o 00:02:36.239 CC examples/vmd/led/led.o 00:02:36.239 CC examples/nvmf/nvmf/nvmf.o 00:02:36.239 CC app/iscsi_tgt/iscsi_tgt.o 00:02:36.239 LINK verify 00:02:36.239 LINK lsvmd 00:02:36.239 CC examples/util/zipf/zipf.o 00:02:36.239 LINK led 00:02:36.239 LINK reconnect 00:02:36.239 CC test/app/bdev_svc/bdev_svc.o 00:02:36.498 LINK iscsi_tgt 00:02:36.498 LINK zipf 00:02:36.498 LINK blobcli 00:02:36.498 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:36.498 LINK nvmf 00:02:36.498 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:36.498 LINK bdev_svc 00:02:36.498 CC 
examples/nvme/arbitration/arbitration.o 00:02:36.498 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:36.498 CC examples/nvme/hotplug/hotplug.o 00:02:36.756 CC app/spdk_tgt/spdk_tgt.o 00:02:36.756 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:36.756 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:36.756 CC examples/thread/thread/thread_ex.o 00:02:36.756 LINK bdevperf 00:02:36.756 LINK nvme_fuzz 00:02:36.756 LINK hotplug 00:02:36.756 LINK arbitration 00:02:37.014 LINK cmb_copy 00:02:37.014 LINK spdk_tgt 00:02:37.014 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:37.014 LINK nvme_manage 00:02:37.014 LINK thread 00:02:37.014 CC test/app/jsoncat/jsoncat.o 00:02:37.014 CC test/app/histogram_perf/histogram_perf.o 00:02:37.014 CC test/app/stub/stub.o 00:02:37.014 CC app/spdk_lspci/spdk_lspci.o 00:02:37.272 CC examples/nvme/abort/abort.o 00:02:37.272 CC test/bdev/bdevio/bdevio.o 00:02:37.272 LINK spdk_lspci 00:02:37.272 LINK jsoncat 00:02:37.272 LINK histogram_perf 00:02:37.272 CC examples/idxd/perf/perf.o 00:02:37.272 LINK stub 00:02:37.272 LINK vhost_fuzz 00:02:37.530 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:37.530 CC app/spdk_nvme_perf/perf.o 00:02:37.530 CC app/spdk_nvme_identify/identify.o 00:02:37.530 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:37.530 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.530 LINK abort 00:02:37.530 LINK idxd_perf 00:02:37.530 LINK bdevio 00:02:37.788 LINK pmr_persistence 00:02:37.788 LINK interrupt_tgt 00:02:37.788 LINK spdk_nvme_discover 00:02:37.788 TEST_HEADER include/spdk/accel.h 00:02:37.788 TEST_HEADER include/spdk/accel_module.h 00:02:37.788 TEST_HEADER include/spdk/assert.h 00:02:37.788 TEST_HEADER include/spdk/barrier.h 00:02:37.788 TEST_HEADER include/spdk/base64.h 00:02:37.788 CC app/spdk_top/spdk_top.o 00:02:37.788 TEST_HEADER include/spdk/bdev.h 00:02:37.788 TEST_HEADER include/spdk/bdev_module.h 00:02:37.788 TEST_HEADER include/spdk/bdev_zone.h 00:02:37.788 TEST_HEADER include/spdk/bit_array.h 00:02:38.047 TEST_HEADER include/spdk/bit_pool.h 00:02:38.047 CC test/blobfs/mkfs/mkfs.o 00:02:38.047 TEST_HEADER include/spdk/blob_bdev.h 00:02:38.047 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:38.047 TEST_HEADER include/spdk/blobfs.h 00:02:38.047 TEST_HEADER include/spdk/blob.h 00:02:38.047 TEST_HEADER include/spdk/conf.h 00:02:38.047 TEST_HEADER include/spdk/config.h 00:02:38.047 TEST_HEADER include/spdk/cpuset.h 00:02:38.047 TEST_HEADER include/spdk/crc16.h 00:02:38.047 TEST_HEADER include/spdk/crc32.h 00:02:38.047 TEST_HEADER include/spdk/crc64.h 00:02:38.047 TEST_HEADER include/spdk/dif.h 00:02:38.047 TEST_HEADER include/spdk/dma.h 00:02:38.047 TEST_HEADER include/spdk/endian.h 00:02:38.047 TEST_HEADER include/spdk/env_dpdk.h 00:02:38.047 TEST_HEADER include/spdk/env.h 00:02:38.047 TEST_HEADER include/spdk/event.h 00:02:38.047 TEST_HEADER include/spdk/fd_group.h 00:02:38.047 TEST_HEADER include/spdk/fd.h 00:02:38.047 TEST_HEADER include/spdk/file.h 00:02:38.047 TEST_HEADER include/spdk/ftl.h 00:02:38.047 TEST_HEADER include/spdk/gpt_spec.h 00:02:38.047 TEST_HEADER include/spdk/hexlify.h 00:02:38.047 TEST_HEADER include/spdk/histogram_data.h 00:02:38.047 TEST_HEADER include/spdk/idxd.h 00:02:38.047 TEST_HEADER include/spdk/idxd_spec.h 00:02:38.047 TEST_HEADER include/spdk/init.h 00:02:38.047 TEST_HEADER include/spdk/ioat.h 00:02:38.047 TEST_HEADER include/spdk/ioat_spec.h 00:02:38.047 TEST_HEADER include/spdk/iscsi_spec.h 00:02:38.047 TEST_HEADER include/spdk/json.h 00:02:38.047 TEST_HEADER include/spdk/jsonrpc.h 00:02:38.047 CC 
test/dma/test_dma/test_dma.o 00:02:38.047 TEST_HEADER include/spdk/likely.h 00:02:38.047 TEST_HEADER include/spdk/log.h 00:02:38.047 TEST_HEADER include/spdk/lvol.h 00:02:38.047 TEST_HEADER include/spdk/memory.h 00:02:38.047 TEST_HEADER include/spdk/mmio.h 00:02:38.047 TEST_HEADER include/spdk/nbd.h 00:02:38.047 TEST_HEADER include/spdk/notify.h 00:02:38.047 TEST_HEADER include/spdk/nvme.h 00:02:38.047 TEST_HEADER include/spdk/nvme_intel.h 00:02:38.047 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:38.047 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:38.047 TEST_HEADER include/spdk/nvme_spec.h 00:02:38.047 TEST_HEADER include/spdk/nvme_zns.h 00:02:38.047 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:38.047 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:38.047 TEST_HEADER include/spdk/nvmf.h 00:02:38.047 TEST_HEADER include/spdk/nvmf_spec.h 00:02:38.047 TEST_HEADER include/spdk/nvmf_transport.h 00:02:38.047 TEST_HEADER include/spdk/opal.h 00:02:38.047 TEST_HEADER include/spdk/opal_spec.h 00:02:38.047 TEST_HEADER include/spdk/pci_ids.h 00:02:38.047 TEST_HEADER include/spdk/pipe.h 00:02:38.047 TEST_HEADER include/spdk/queue.h 00:02:38.047 TEST_HEADER include/spdk/reduce.h 00:02:38.047 TEST_HEADER include/spdk/rpc.h 00:02:38.047 TEST_HEADER include/spdk/scheduler.h 00:02:38.047 TEST_HEADER include/spdk/scsi.h 00:02:38.047 TEST_HEADER include/spdk/scsi_spec.h 00:02:38.047 TEST_HEADER include/spdk/sock.h 00:02:38.047 TEST_HEADER include/spdk/stdinc.h 00:02:38.047 TEST_HEADER include/spdk/string.h 00:02:38.047 TEST_HEADER include/spdk/thread.h 00:02:38.047 TEST_HEADER include/spdk/trace.h 00:02:38.047 CC app/vhost/vhost.o 00:02:38.047 TEST_HEADER include/spdk/trace_parser.h 00:02:38.047 TEST_HEADER include/spdk/tree.h 00:02:38.047 TEST_HEADER include/spdk/ublk.h 00:02:38.047 TEST_HEADER include/spdk/util.h 00:02:38.047 TEST_HEADER include/spdk/uuid.h 00:02:38.047 TEST_HEADER include/spdk/version.h 00:02:38.047 CC test/env/mem_callbacks/mem_callbacks.o 00:02:38.047 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:38.047 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:38.047 TEST_HEADER include/spdk/vhost.h 00:02:38.047 TEST_HEADER include/spdk/vmd.h 00:02:38.047 TEST_HEADER include/spdk/xor.h 00:02:38.047 TEST_HEADER include/spdk/zipf.h 00:02:38.047 CXX test/cpp_headers/accel.o 00:02:38.047 LINK mkfs 00:02:38.047 LINK iscsi_fuzz 00:02:38.306 LINK vhost 00:02:38.306 LINK spdk_nvme_perf 00:02:38.306 CXX test/cpp_headers/accel_module.o 00:02:38.306 LINK spdk_nvme_identify 00:02:38.306 CXX test/cpp_headers/assert.o 00:02:38.565 LINK test_dma 00:02:38.565 CC test/event/event_perf/event_perf.o 00:02:38.565 CXX test/cpp_headers/barrier.o 00:02:38.565 CC test/env/vtophys/vtophys.o 00:02:38.565 CC test/env/memory/memory_ut.o 00:02:38.565 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:38.823 CC test/lvol/esnap/esnap.o 00:02:38.823 LINK vtophys 00:02:38.823 CXX test/cpp_headers/base64.o 00:02:38.823 LINK event_perf 00:02:38.823 LINK env_dpdk_post_init 00:02:38.823 LINK spdk_top 00:02:38.823 LINK mem_callbacks 00:02:38.823 CC test/nvme/aer/aer.o 00:02:38.823 CXX test/cpp_headers/bdev.o 00:02:39.082 CC test/rpc_client/rpc_client_test.o 00:02:39.082 CXX test/cpp_headers/bdev_module.o 00:02:39.082 CXX test/cpp_headers/bdev_zone.o 00:02:39.082 CC test/event/reactor/reactor.o 00:02:39.082 CC app/spdk_dd/spdk_dd.o 00:02:39.082 LINK rpc_client_test 00:02:39.082 LINK reactor 00:02:39.341 CXX test/cpp_headers/bit_array.o 00:02:39.341 LINK aer 00:02:39.341 CC test/env/pci/pci_ut.o 00:02:39.341 CC 
app/fio/nvme/fio_plugin.o 00:02:39.341 CXX test/cpp_headers/bit_pool.o 00:02:39.341 CC test/event/reactor_perf/reactor_perf.o 00:02:39.341 CC test/thread/poller_perf/poller_perf.o 00:02:39.341 CC test/nvme/reset/reset.o 00:02:39.600 LINK spdk_dd 00:02:39.600 CXX test/cpp_headers/blob_bdev.o 00:02:39.600 LINK reactor_perf 00:02:39.600 LINK poller_perf 00:02:39.600 LINK memory_ut 00:02:39.600 LINK pci_ut 00:02:39.859 LINK reset 00:02:39.859 CXX test/cpp_headers/blobfs_bdev.o 00:02:39.859 CXX test/cpp_headers/blobfs.o 00:02:39.859 CXX test/cpp_headers/blob.o 00:02:39.859 LINK spdk_nvme 00:02:39.859 CC test/event/app_repeat/app_repeat.o 00:02:39.859 CXX test/cpp_headers/conf.o 00:02:40.117 CC test/event/scheduler/scheduler.o 00:02:40.117 CC test/nvme/sgl/sgl.o 00:02:40.117 CXX test/cpp_headers/config.o 00:02:40.117 LINK app_repeat 00:02:40.117 CC app/fio/bdev/fio_plugin.o 00:02:40.117 CXX test/cpp_headers/cpuset.o 00:02:40.117 CC test/nvme/e2edp/nvme_dp.o 00:02:40.117 CC test/nvme/overhead/overhead.o 00:02:40.376 CC test/nvme/err_injection/err_injection.o 00:02:40.376 CXX test/cpp_headers/crc16.o 00:02:40.376 LINK scheduler 00:02:40.376 CC test/nvme/startup/startup.o 00:02:40.376 LINK sgl 00:02:40.376 LINK nvme_dp 00:02:40.634 CXX test/cpp_headers/crc32.o 00:02:40.634 LINK err_injection 00:02:40.634 LINK overhead 00:02:40.634 LINK spdk_bdev 00:02:40.634 LINK startup 00:02:40.634 CXX test/cpp_headers/crc64.o 00:02:40.634 CC test/nvme/reserve/reserve.o 00:02:40.634 CC test/nvme/simple_copy/simple_copy.o 00:02:40.893 CC test/nvme/connect_stress/connect_stress.o 00:02:40.893 CC test/nvme/boot_partition/boot_partition.o 00:02:40.893 CC test/nvme/compliance/nvme_compliance.o 00:02:40.893 CC test/nvme/fused_ordering/fused_ordering.o 00:02:40.893 CXX test/cpp_headers/dif.o 00:02:40.893 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:40.893 LINK reserve 00:02:40.893 LINK simple_copy 00:02:40.893 LINK boot_partition 00:02:40.893 LINK connect_stress 00:02:40.893 CXX test/cpp_headers/dma.o 00:02:41.152 LINK fused_ordering 00:02:41.152 LINK doorbell_aers 00:02:41.152 CC test/nvme/fdp/fdp.o 00:02:41.152 CC test/nvme/cuse/cuse.o 00:02:41.152 LINK nvme_compliance 00:02:41.152 CXX test/cpp_headers/endian.o 00:02:41.411 CXX test/cpp_headers/env_dpdk.o 00:02:41.411 CXX test/cpp_headers/env.o 00:02:41.411 CXX test/cpp_headers/event.o 00:02:41.411 CXX test/cpp_headers/fd_group.o 00:02:41.411 CXX test/cpp_headers/fd.o 00:02:41.702 CXX test/cpp_headers/file.o 00:02:41.702 CXX test/cpp_headers/ftl.o 00:02:41.702 CXX test/cpp_headers/gpt_spec.o 00:02:41.702 CXX test/cpp_headers/hexlify.o 00:02:41.702 CXX test/cpp_headers/histogram_data.o 00:02:41.702 LINK fdp 00:02:41.702 CXX test/cpp_headers/idxd.o 00:02:41.702 CXX test/cpp_headers/idxd_spec.o 00:02:41.702 CXX test/cpp_headers/init.o 00:02:41.961 CXX test/cpp_headers/ioat.o 00:02:41.961 CXX test/cpp_headers/ioat_spec.o 00:02:41.961 CXX test/cpp_headers/iscsi_spec.o 00:02:41.961 CXX test/cpp_headers/json.o 00:02:41.961 CXX test/cpp_headers/jsonrpc.o 00:02:41.961 CXX test/cpp_headers/likely.o 00:02:42.220 CXX test/cpp_headers/log.o 00:02:42.220 CXX test/cpp_headers/lvol.o 00:02:42.220 CXX test/cpp_headers/memory.o 00:02:42.220 CXX test/cpp_headers/mmio.o 00:02:42.220 CXX test/cpp_headers/nbd.o 00:02:42.220 CXX test/cpp_headers/notify.o 00:02:42.220 CXX test/cpp_headers/nvme.o 00:02:42.220 CXX test/cpp_headers/nvme_intel.o 00:02:42.220 CXX test/cpp_headers/nvme_ocssd.o 00:02:42.220 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:42.478 CXX test/cpp_headers/nvme_spec.o 
00:02:42.478 CXX test/cpp_headers/nvme_zns.o 00:02:42.478 CXX test/cpp_headers/nvmf_cmd.o 00:02:42.478 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:42.478 CXX test/cpp_headers/nvmf.o 00:02:42.478 CXX test/cpp_headers/nvmf_spec.o 00:02:42.478 CXX test/cpp_headers/nvmf_transport.o 00:02:42.478 LINK cuse 00:02:42.478 CXX test/cpp_headers/opal.o 00:02:42.478 CXX test/cpp_headers/opal_spec.o 00:02:42.737 CXX test/cpp_headers/pci_ids.o 00:02:42.737 CXX test/cpp_headers/pipe.o 00:02:42.737 CXX test/cpp_headers/queue.o 00:02:42.737 CXX test/cpp_headers/reduce.o 00:02:42.737 CXX test/cpp_headers/rpc.o 00:02:42.737 CXX test/cpp_headers/scheduler.o 00:02:42.737 CXX test/cpp_headers/scsi.o 00:02:42.737 CXX test/cpp_headers/scsi_spec.o 00:02:42.737 CXX test/cpp_headers/sock.o 00:02:42.737 CXX test/cpp_headers/stdinc.o 00:02:42.996 CXX test/cpp_headers/string.o 00:02:42.996 CXX test/cpp_headers/thread.o 00:02:42.996 CXX test/cpp_headers/trace.o 00:02:42.996 CXX test/cpp_headers/trace_parser.o 00:02:42.996 CXX test/cpp_headers/tree.o 00:02:42.996 CXX test/cpp_headers/ublk.o 00:02:42.996 CXX test/cpp_headers/util.o 00:02:42.996 CXX test/cpp_headers/uuid.o 00:02:42.996 CXX test/cpp_headers/version.o 00:02:42.996 CXX test/cpp_headers/vfio_user_pci.o 00:02:42.996 CXX test/cpp_headers/vfio_user_spec.o 00:02:42.996 CXX test/cpp_headers/vhost.o 00:02:43.255 CXX test/cpp_headers/vmd.o 00:02:43.255 CXX test/cpp_headers/xor.o 00:02:43.255 CXX test/cpp_headers/zipf.o 00:02:43.823 LINK esnap 00:02:44.391 00:02:44.391 real 1m1.889s 00:02:44.391 user 6m35.749s 00:02:44.391 sys 1m37.682s 00:02:44.391 ************************************ 00:02:44.391 END TEST make 00:02:44.391 19:03:21 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:44.391 19:03:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:44.391 ************************************ 00:02:44.649 19:03:21 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:44.649 19:03:21 -- nvmf/common.sh@7 -- # uname -s 00:02:44.649 19:03:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:44.649 19:03:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:44.649 19:03:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:44.649 19:03:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:44.649 19:03:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:44.649 19:03:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:44.649 19:03:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:44.649 19:03:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:44.649 19:03:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:44.649 19:03:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:44.649 19:03:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:02:44.649 19:03:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:02:44.649 19:03:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:44.649 19:03:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:44.649 19:03:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:02:44.649 19:03:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:44.649 19:03:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:44.649 19:03:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:44.649 19:03:21 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:02:44.649 19:03:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.649 19:03:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.649 19:03:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.649 19:03:21 -- paths/export.sh@5 -- # export PATH 00:02:44.649 19:03:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.649 19:03:21 -- nvmf/common.sh@46 -- # : 0 00:02:44.649 19:03:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:44.649 19:03:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:44.649 19:03:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:44.649 19:03:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:44.649 19:03:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:44.649 19:03:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:44.649 19:03:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:44.649 19:03:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:44.649 19:03:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:44.649 19:03:21 -- spdk/autotest.sh@32 -- # uname -s 00:02:44.649 19:03:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:44.649 19:03:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:44.649 19:03:21 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:44.649 19:03:21 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:44.649 19:03:21 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:44.649 19:03:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:44.649 19:03:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:44.649 19:03:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:44.649 19:03:21 -- spdk/autotest.sh@48 -- # udevadm_pid=49662 00:02:44.649 19:03:21 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:44.649 19:03:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:44.649 19:03:21 -- spdk/autotest.sh@54 -- # echo 49665 00:02:44.649 19:03:21 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:44.649 19:03:21 -- spdk/autotest.sh@56 -- # echo 49670 00:02:44.649 19:03:21 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:44.649 19:03:21 -- 
spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:44.649 19:03:21 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:44.649 19:03:21 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:44.649 19:03:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:44.649 19:03:21 -- common/autotest_common.sh@10 -- # set +x 00:02:44.649 19:03:21 -- spdk/autotest.sh@70 -- # create_test_list 00:02:44.649 19:03:21 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:44.649 19:03:21 -- common/autotest_common.sh@10 -- # set +x 00:02:44.649 19:03:21 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:44.649 19:03:21 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:44.649 19:03:21 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:44.649 19:03:21 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:44.649 19:03:21 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:44.649 19:03:21 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:44.649 19:03:21 -- common/autotest_common.sh@1438 -- # uname 00:02:44.649 19:03:21 -- common/autotest_common.sh@1438 -- # '[' Linux = FreeBSD ']' 00:02:44.649 19:03:21 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:44.649 19:03:21 -- common/autotest_common.sh@1458 -- # uname 00:02:44.649 19:03:21 -- common/autotest_common.sh@1458 -- # [[ Linux = FreeBSD ]] 00:02:44.649 19:03:21 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:44.649 19:03:21 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:44.649 19:03:21 -- spdk/autotest.sh@83 -- # hash lcov 00:02:44.649 19:03:21 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:44.649 19:03:21 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:44.649 --rc lcov_branch_coverage=1 00:02:44.649 --rc lcov_function_coverage=1 00:02:44.650 --rc genhtml_branch_coverage=1 00:02:44.650 --rc genhtml_function_coverage=1 00:02:44.650 --rc genhtml_legend=1 00:02:44.650 --rc geninfo_all_blocks=1 00:02:44.650 ' 00:02:44.650 19:03:21 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:44.650 --rc lcov_branch_coverage=1 00:02:44.650 --rc lcov_function_coverage=1 00:02:44.650 --rc genhtml_branch_coverage=1 00:02:44.650 --rc genhtml_function_coverage=1 00:02:44.650 --rc genhtml_legend=1 00:02:44.650 --rc geninfo_all_blocks=1 00:02:44.650 ' 00:02:44.650 19:03:21 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:44.650 --rc lcov_branch_coverage=1 00:02:44.650 --rc lcov_function_coverage=1 00:02:44.650 --rc genhtml_branch_coverage=1 00:02:44.650 --rc genhtml_function_coverage=1 00:02:44.650 --rc genhtml_legend=1 00:02:44.650 --rc geninfo_all_blocks=1 00:02:44.650 --no-external' 00:02:44.650 19:03:21 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:44.650 --rc lcov_branch_coverage=1 00:02:44.650 --rc lcov_function_coverage=1 00:02:44.650 --rc genhtml_branch_coverage=1 00:02:44.650 --rc genhtml_function_coverage=1 00:02:44.650 --rc genhtml_legend=1 00:02:44.650 --rc geninfo_all_blocks=1 00:02:44.650 --no-external' 00:02:44.650 19:03:21 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:44.650 lcov: LCOV version 1.14 00:02:44.650 19:03:22 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:02:52.764 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:52.764 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:52.764 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:52.764 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:52.764 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:52.764 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions 
found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:10.903 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:10.903 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:10.903 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 
00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:10.904 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:10.904 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:13.438 19:03:50 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:13.438 19:03:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:13.438 19:03:50 -- common/autotest_common.sh@10 -- # set +x 00:03:13.438 19:03:50 -- spdk/autotest.sh@102 -- # rm -f 00:03:13.438 19:03:50 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:13.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:13.697 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:13.697 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:13.956 19:03:51 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:13.956 19:03:51 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:03:13.956 19:03:51 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:03:13.956 19:03:51 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:03:13.956 19:03:51 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:13.956 19:03:51 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:03:13.956 19:03:51 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:03:13.956 19:03:51 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:13.956 19:03:51 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:13.956 19:03:51 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:13.956 19:03:51 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:03:13.956 19:03:51 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:03:13.956 19:03:51 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:13.956 19:03:51 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:13.956 19:03:51 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:13.956 19:03:51 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 
00:03:13.956 19:03:51 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:03:13.956 19:03:51 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:13.956 19:03:51 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:13.956 19:03:51 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:13.956 19:03:51 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n3 00:03:13.957 19:03:51 -- common/autotest_common.sh@1645 -- # local device=nvme1n3 00:03:13.957 19:03:51 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:13.957 19:03:51 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:13.957 19:03:51 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:13.957 19:03:51 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:03:13.957 19:03:51 -- spdk/autotest.sh@121 -- # grep -v p 00:03:13.957 19:03:51 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:13.957 19:03:51 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:13.957 19:03:51 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:13.957 19:03:51 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:13.957 19:03:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:13.957 No valid GPT data, bailing 00:03:13.957 19:03:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:13.957 19:03:51 -- scripts/common.sh@393 -- # pt= 00:03:13.957 19:03:51 -- scripts/common.sh@394 -- # return 1 00:03:13.957 19:03:51 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:13.957 1+0 records in 00:03:13.957 1+0 records out 00:03:13.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436834 s, 240 MB/s 00:03:13.957 19:03:51 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:13.957 19:03:51 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:13.957 19:03:51 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:03:13.957 19:03:51 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:13.957 19:03:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:13.957 No valid GPT data, bailing 00:03:13.957 19:03:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:13.957 19:03:51 -- scripts/common.sh@393 -- # pt= 00:03:13.957 19:03:51 -- scripts/common.sh@394 -- # return 1 00:03:13.957 19:03:51 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:13.957 1+0 records in 00:03:13.957 1+0 records out 00:03:13.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440288 s, 238 MB/s 00:03:13.957 19:03:51 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:13.957 19:03:51 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:13.957 19:03:51 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:03:13.957 19:03:51 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:03:13.957 19:03:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:13.957 No valid GPT data, bailing 00:03:13.957 19:03:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:14.216 19:03:51 -- scripts/common.sh@393 -- # pt= 00:03:14.216 19:03:51 -- scripts/common.sh@394 -- # return 1 00:03:14.216 19:03:51 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:14.216 1+0 
records in 00:03:14.216 1+0 records out 00:03:14.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457798 s, 229 MB/s 00:03:14.216 19:03:51 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:14.216 19:03:51 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:14.216 19:03:51 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:03:14.216 19:03:51 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:03:14.216 19:03:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:14.216 No valid GPT data, bailing 00:03:14.216 19:03:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:14.216 19:03:51 -- scripts/common.sh@393 -- # pt= 00:03:14.216 19:03:51 -- scripts/common.sh@394 -- # return 1 00:03:14.216 19:03:51 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:14.216 1+0 records in 00:03:14.216 1+0 records out 00:03:14.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00383619 s, 273 MB/s 00:03:14.216 19:03:51 -- spdk/autotest.sh@129 -- # sync 00:03:14.216 19:03:51 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:14.216 19:03:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:14.216 19:03:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.121 19:03:53 -- spdk/autotest.sh@135 -- # uname -s 00:03:16.121 19:03:53 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:16.121 19:03:53 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:16.121 19:03:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:16.121 19:03:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:16.121 19:03:53 -- common/autotest_common.sh@10 -- # set +x 00:03:16.121 ************************************ 00:03:16.121 START TEST setup.sh 00:03:16.121 ************************************ 00:03:16.121 19:03:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:16.121 * Looking for test storage... 00:03:16.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:16.121 19:03:53 -- setup/test-setup.sh@10 -- # uname -s 00:03:16.121 19:03:53 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:16.121 19:03:53 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:16.121 19:03:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:16.121 19:03:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:16.121 19:03:53 -- common/autotest_common.sh@10 -- # set +x 00:03:16.121 ************************************ 00:03:16.121 START TEST acl 00:03:16.121 ************************************ 00:03:16.121 19:03:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:16.121 * Looking for test storage... 
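A note on the pre-cleanup step traced just above: for every whole NVMe namespace, autotest first probes for an existing GPT (scripts/spdk-gpt.py), then asks blkid for a partition-table type, and only when both come back empty does it overwrite the first 1 MiB with zeros. The following is a simplified stand-alone sketch of that check-then-wipe flow, not the actual scripts/common.sh code; the helper name wipe_if_unused is invented for illustration.

#!/usr/bin/env bash
# Rough sketch of the check-then-wipe step seen in the trace above.
# wipe_if_unused is a hypothetical name; the real logic lives in
# scripts/common.sh (block_in_use) and spdk/autotest.sh.
wipe_if_unused() {
    local dev=$1 pt
    # A device counts as "in use" if blkid reports any partition-table type.
    pt=$(blkid -s PTTYPE -o value "$dev")
    if [[ -n $pt ]]; then
        echo "$dev has a $pt partition table, skipping"
        return 1
    fi
    # Nothing found: clobber the first 1 MiB so later tests start from a clean device.
    dd if=/dev/zero of="$dev" bs=1M count=1
}

for dev in /dev/nvme*n*; do
    [[ -e $dev ]] || continue
    [[ $dev == *p* ]] && continue   # skip partitions, keep whole namespaces
    wipe_if_unused "$dev"
done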
00:03:16.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:16.121 19:03:53 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:16.121 19:03:53 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:03:16.121 19:03:53 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:03:16.121 19:03:53 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:03:16.122 19:03:53 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:16.122 19:03:53 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:03:16.122 19:03:53 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:03:16.122 19:03:53 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.122 19:03:53 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:16.122 19:03:53 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:16.122 19:03:53 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:03:16.122 19:03:53 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:03:16.122 19:03:53 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:16.122 19:03:53 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:16.122 19:03:53 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:16.122 19:03:53 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:03:16.122 19:03:53 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:03:16.122 19:03:53 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:16.122 19:03:53 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:16.122 19:03:53 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:16.122 19:03:53 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n3 00:03:16.122 19:03:53 -- common/autotest_common.sh@1645 -- # local device=nvme1n3 00:03:16.122 19:03:53 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:16.122 19:03:53 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:16.122 19:03:53 -- setup/acl.sh@12 -- # devs=() 00:03:16.122 19:03:53 -- setup/acl.sh@12 -- # declare -a devs 00:03:16.122 19:03:53 -- setup/acl.sh@13 -- # drivers=() 00:03:16.122 19:03:53 -- setup/acl.sh@13 -- # declare -A drivers 00:03:16.122 19:03:53 -- setup/acl.sh@51 -- # setup reset 00:03:16.122 19:03:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.122 19:03:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:17.058 19:03:54 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:17.058 19:03:54 -- setup/acl.sh@16 -- # local dev driver 00:03:17.058 19:03:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.058 19:03:54 -- setup/acl.sh@15 -- # setup output status 00:03:17.058 19:03:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.058 19:03:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:17.058 Hugepages 00:03:17.058 node hugesize free / total 00:03:17.058 19:03:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:17.058 19:03:54 -- setup/acl.sh@19 -- # continue 00:03:17.058 19:03:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.058 00:03:17.058 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:17.058 19:03:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:17.058 19:03:54 -- setup/acl.sh@19 -- # continue 00:03:17.058 19:03:54 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:03:17.058 19:03:54 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:17.058 19:03:54 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:17.058 19:03:54 -- setup/acl.sh@20 -- # continue 00:03:17.058 19:03:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.317 19:03:54 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:17.317 19:03:54 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:17.317 19:03:54 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:17.317 19:03:54 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:17.317 19:03:54 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:17.317 19:03:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.317 19:03:54 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:17.317 19:03:54 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:17.317 19:03:54 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:17.317 19:03:54 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:17.317 19:03:54 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:17.317 19:03:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.317 19:03:54 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:17.317 19:03:54 -- setup/acl.sh@54 -- # run_test denied denied 00:03:17.317 19:03:54 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:17.317 19:03:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:17.317 19:03:54 -- common/autotest_common.sh@10 -- # set +x 00:03:17.317 ************************************ 00:03:17.317 START TEST denied 00:03:17.317 ************************************ 00:03:17.317 19:03:54 -- common/autotest_common.sh@1102 -- # denied 00:03:17.317 19:03:54 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:17.317 19:03:54 -- setup/acl.sh@38 -- # setup output config 00:03:17.317 19:03:54 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:17.317 19:03:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.317 19:03:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:18.255 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:18.255 19:03:55 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:18.255 19:03:55 -- setup/acl.sh@28 -- # local dev driver 00:03:18.255 19:03:55 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:18.255 19:03:55 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:18.255 19:03:55 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:18.255 19:03:55 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:18.255 19:03:55 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:18.255 19:03:55 -- setup/acl.sh@41 -- # setup reset 00:03:18.255 19:03:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.255 19:03:55 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:18.823 00:03:18.823 real 0m1.493s 00:03:18.823 user 0m0.595s 00:03:18.823 sys 0m0.851s 00:03:18.823 ************************************ 00:03:18.823 END TEST denied 00:03:18.823 ************************************ 00:03:18.823 19:03:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:18.823 19:03:56 -- common/autotest_common.sh@10 -- # set +x 00:03:18.823 19:03:56 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:18.823 19:03:56 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:18.823 19:03:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:18.823 
19:03:56 -- common/autotest_common.sh@10 -- # set +x 00:03:18.823 ************************************ 00:03:18.823 START TEST allowed 00:03:18.823 ************************************ 00:03:18.823 19:03:56 -- common/autotest_common.sh@1102 -- # allowed 00:03:18.823 19:03:56 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:18.823 19:03:56 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:18.823 19:03:56 -- setup/acl.sh@45 -- # setup output config 00:03:18.823 19:03:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.823 19:03:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:19.759 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:19.759 19:03:56 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:03:19.759 19:03:56 -- setup/acl.sh@28 -- # local dev driver 00:03:19.759 19:03:56 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:19.759 19:03:56 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:19.759 19:03:56 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:03:19.759 19:03:56 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:19.759 19:03:56 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:19.759 19:03:56 -- setup/acl.sh@48 -- # setup reset 00:03:19.759 19:03:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:19.759 19:03:56 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:20.337 00:03:20.337 real 0m1.578s 00:03:20.337 user 0m0.688s 00:03:20.337 sys 0m0.898s 00:03:20.337 19:03:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:20.337 19:03:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.337 ************************************ 00:03:20.337 END TEST allowed 00:03:20.337 ************************************ 00:03:20.597 00:03:20.597 real 0m4.401s 00:03:20.597 user 0m1.867s 00:03:20.597 sys 0m2.518s 00:03:20.597 19:03:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:20.597 ************************************ 00:03:20.597 END TEST acl 00:03:20.597 19:03:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.597 ************************************ 00:03:20.597 19:03:57 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:20.597 19:03:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:20.597 19:03:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:20.597 19:03:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.597 ************************************ 00:03:20.597 START TEST hugepages 00:03:20.597 ************************************ 00:03:20.597 19:03:57 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:20.597 * Looking for test storage... 
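Both acl sub-tests above drive scripts/setup.sh with PCI_BLOCKED / PCI_ALLOWED and then confirm which kernel driver each controller ended up bound to by following its driver symlink in sysfs (readlink -f /sys/bus/pci/devices/<bdf>/driver). A minimal stand-alone version of that driver check could look like the sketch below; driver_of and expect_driver are invented names, and the real assertions live in test/setup/acl.sh.

#!/usr/bin/env bash
# Hypothetical condensed form of the sysfs driver check used by the acl tests.
driver_of() {
    # Print the driver currently bound to a PCI function, e.g. "nvme" or "uio_pci_generic".
    local link=/sys/bus/pci/devices/$1/driver
    [[ -e $link ]] || { echo none; return; }
    basename "$(readlink -f "$link")"
}

expect_driver() {
    local bdf=$1 want=$2 got
    got=$(driver_of "$bdf")
    if [[ $got != "$want" ]]; then
        echo "FAIL: $bdf bound to $got, expected $want" >&2
        return 1
    fi
    echo "OK: $bdf -> $got"
}

# After PCI_ALLOWED=0000:00:06.0 setup.sh config (as traced above), 00:06.0 should
# have been rebound away from nvme while 00:07.0 stays on the nvme driver:
expect_driver 0000:00:06.0 uio_pci_generic
expect_driver 0000:00:07.0 nvme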
00:03:20.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:20.597 19:03:57 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:20.597 19:03:57 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:20.597 19:03:57 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:20.597 19:03:57 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:20.597 19:03:57 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:20.597 19:03:57 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:20.597 19:03:57 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:20.597 19:03:57 -- setup/common.sh@18 -- # local node= 00:03:20.597 19:03:57 -- setup/common.sh@19 -- # local var val 00:03:20.597 19:03:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.597 19:03:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.597 19:03:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.597 19:03:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.597 19:03:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.597 19:03:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5471128 kB' 'MemAvailable: 7393608 kB' 'Buffers: 2436 kB' 'Cached: 2132228 kB' 'SwapCached: 0 kB' 'Active: 872964 kB' 'Inactive: 1364992 kB' 'Active(anon): 113780 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1364992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 104848 kB' 'Mapped: 48596 kB' 'Shmem: 10488 kB' 'KReclaimable: 70488 kB' 'Slab: 144172 kB' 'SReclaimable: 70488 kB' 'SUnreclaim: 73684 kB' 'KernelStack: 6468 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 335728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- 
setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.597 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.597 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # continue 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.598 19:03:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.598 19:03:57 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.598 19:03:57 -- setup/common.sh@33 -- # echo 2048 00:03:20.598 19:03:57 -- setup/common.sh@33 -- # return 0 00:03:20.598 19:03:57 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:20.598 19:03:57 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:20.598 19:03:57 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:20.598 19:03:57 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:20.598 19:03:57 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:20.598 19:03:57 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
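The trace above walks /proc/meminfo record by record until the Hugepagesize line is reached (2048 kB on this host), and hugepages.sh then derives the per-size sysfs knob and, in the default_setup test that follows, a page count from a kB budget (2097152 kB / 2048 kB per page = 1024 pages). A minimal sketch of that derivation, assuming the same 2 MiB default page size; the helper name and structure below are illustrative, not the actual setup/common.sh code:

#!/usr/bin/env bash
# Sketch: find the default hugepage size in /proc/meminfo and turn a kB
# budget into a page count, mirroring the arithmetic seen in this trace
# (2097152 kB / 2048 kB per page = 1024 pages). Names are illustrative.
set -euo pipefail

default_hugepagesize_kb() {
    local var val _
    while IFS=': ' read -r var val _; do
        # Same shape as the loop being traced: skip every record until the
        # Hugepagesize line, then print its value in kB.
        [[ $var == Hugepagesize ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

size_kb=2097152                        # budget requested by the test
page_kb=$(default_hugepagesize_kb)     # 2048 kB on the system traced above
nr_pages=$(( size_kb / page_kb ))      # 2097152 / 2048 = 1024

echo "default hugepage size: ${page_kb} kB"
echo "pages for ${size_kb} kB: ${nr_pages}"
echo "per-size knob: /sys/kernel/mm/hugepages/hugepages-${page_kb}kB/nr_hugepages"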
00:03:20.598 19:03:57 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:20.598 19:03:57 -- setup/hugepages.sh@207 -- # get_nodes 00:03:20.598 19:03:57 -- setup/hugepages.sh@27 -- # local node 00:03:20.598 19:03:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.598 19:03:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:20.598 19:03:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:20.598 19:03:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.598 19:03:57 -- setup/hugepages.sh@208 -- # clear_hp 00:03:20.598 19:03:57 -- setup/hugepages.sh@37 -- # local node hp 00:03:20.598 19:03:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.598 19:03:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.598 19:03:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:20.598 19:03:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.598 19:03:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:20.598 19:03:57 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:20.598 19:03:57 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:20.598 19:03:57 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:20.598 19:03:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:20.598 19:03:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:20.598 19:03:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.598 ************************************ 00:03:20.598 START TEST default_setup 00:03:20.598 ************************************ 00:03:20.598 19:03:57 -- common/autotest_common.sh@1102 -- # default_setup 00:03:20.598 19:03:57 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:20.598 19:03:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:20.598 19:03:57 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:20.598 19:03:57 -- setup/hugepages.sh@51 -- # shift 00:03:20.598 19:03:57 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:20.598 19:03:57 -- setup/hugepages.sh@52 -- # local node_ids 00:03:20.598 19:03:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.598 19:03:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:20.598 19:03:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:20.598 19:03:57 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:20.598 19:03:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.598 19:03:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.598 19:03:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:20.598 19:03:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.598 19:03:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.598 19:03:57 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:20.598 19:03:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:20.598 19:03:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:20.598 19:03:57 -- setup/hugepages.sh@73 -- # return 0 00:03:20.598 19:03:57 -- setup/hugepages.sh@137 -- # setup output 00:03:20.598 19:03:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.598 19:03:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:21.536 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:21.536 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:21.536 
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:21.536 19:03:58 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:21.536 19:03:58 -- setup/hugepages.sh@89 -- # local node 00:03:21.536 19:03:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.536 19:03:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.536 19:03:58 -- setup/hugepages.sh@92 -- # local surp 00:03:21.536 19:03:58 -- setup/hugepages.sh@93 -- # local resv 00:03:21.536 19:03:58 -- setup/hugepages.sh@94 -- # local anon 00:03:21.536 19:03:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.536 19:03:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.536 19:03:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.536 19:03:58 -- setup/common.sh@18 -- # local node= 00:03:21.536 19:03:58 -- setup/common.sh@19 -- # local var val 00:03:21.536 19:03:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.536 19:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.536 19:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.536 19:03:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.536 19:03:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.536 19:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.536 19:03:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7471940 kB' 'MemAvailable: 9394252 kB' 'Buffers: 2436 kB' 'Cached: 2132216 kB' 'SwapCached: 0 kB' 'Active: 889144 kB' 'Inactive: 1364996 kB' 'Active(anon): 129960 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1364996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120892 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143840 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73696 kB' 'KernelStack: 6448 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.536 19:03:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.536 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.536 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 
19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 
-- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.537 19:03:58 -- setup/common.sh@33 -- # echo 0 00:03:21.537 19:03:58 -- setup/common.sh@33 -- # return 0 00:03:21.537 19:03:58 -- setup/hugepages.sh@97 -- # anon=0 00:03:21.537 19:03:58 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.537 19:03:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.537 19:03:58 -- setup/common.sh@18 -- # local node= 00:03:21.537 19:03:58 -- setup/common.sh@19 -- # local var val 00:03:21.537 19:03:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.537 19:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.537 19:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.537 19:03:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.537 19:03:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.537 19:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7471940 kB' 'MemAvailable: 9394252 kB' 'Buffers: 2436 kB' 'Cached: 2132216 kB' 'SwapCached: 0 kB' 'Active: 888652 kB' 'Inactive: 1364996 kB' 'Active(anon): 129468 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1364996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120668 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143832 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73688 kB' 'KernelStack: 6416 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.537 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.537 19:03:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- 
setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 
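Nearly all of the output in this stretch is one helper re-reading /proc/meminfo and discarding records until the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, and later HugePages_Total) matches. A condensed sketch of that pattern, reconstructed from the trace; the Node-prefix handling and node selection are simplified and the real setup/common.sh may differ:

# Sketch of the get_meminfo pattern repeated throughout this trace: return
# one field from /proc/meminfo, or from a NUMA node's meminfo when a node id
# is given. Simplified reconstruction, not the exact setup/common.sh helper.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node meminfo prefixes every record with "Node <n> "; strip it so
    # the same key match works for both files.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching records
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# The lookups traced in this section, with the values they returned here:
get_meminfo AnonHugePages     # 0
get_meminfo HugePages_Surp    # 0
get_meminfo HugePages_Rsvd    # 0
get_meminfo HugePages_Total   # 1024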
00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.538 19:03:58 -- setup/common.sh@33 -- # echo 0 00:03:21.538 19:03:58 -- setup/common.sh@33 -- # return 0 00:03:21.538 19:03:58 -- setup/hugepages.sh@99 -- # surp=0 00:03:21.538 19:03:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.538 19:03:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.538 19:03:58 -- setup/common.sh@18 -- # local node= 00:03:21.538 19:03:58 -- setup/common.sh@19 -- # local var val 00:03:21.538 19:03:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.538 19:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.538 19:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.538 19:03:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.538 19:03:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.538 19:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7471940 kB' 'MemAvailable: 9394252 kB' 'Buffers: 2436 kB' 'Cached: 2132216 kB' 
'SwapCached: 0 kB' 'Active: 888624 kB' 'Inactive: 1364996 kB' 'Active(anon): 129440 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1364996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120632 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143832 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73688 kB' 'KernelStack: 6416 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.538 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.538 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 
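The HugePages_Rsvd value being scanned for here (and HugePages_Surp just above) is also exposed per page size under sysfs, which can serve as a quick cross-check without rescanning all of /proc/meminfo. A hedged sketch; the files below are standard kernel hugetlb counters, but confirm they exist on the target kernel before relying on them:

# Cross-check the hugetlb counters from the per-size sysfs directory rather
# than /proc/meminfo. Assumes the 2048 kB default page size seen earlier.
hp=/sys/kernel/mm/hugepages/hugepages-2048kB
for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
    printf '%-18s %s\n' "$f" "$(cat "$hp/$f")"
done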
00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.539 19:03:58 -- setup/common.sh@33 -- # echo 0 00:03:21.539 19:03:58 -- setup/common.sh@33 -- # return 0 00:03:21.539 19:03:58 -- setup/hugepages.sh@100 -- # resv=0 00:03:21.539 nr_hugepages=1024 00:03:21.539 19:03:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.539 resv_hugepages=0 00:03:21.539 19:03:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.539 surplus_hugepages=0 00:03:21.539 19:03:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.539 anon_hugepages=0 00:03:21.539 19:03:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.539 19:03:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.539 19:03:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.539 19:03:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.539 19:03:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.539 19:03:58 -- setup/common.sh@18 -- # local node= 00:03:21.539 19:03:58 -- setup/common.sh@19 -- # local var val 00:03:21.539 19:03:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.539 19:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.539 19:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.539 19:03:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.539 19:03:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.539 19:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7471940 kB' 'MemAvailable: 9394252 kB' 'Buffers: 2436 kB' 'Cached: 2132216 kB' 'SwapCached: 0 kB' 'Active: 888600 kB' 'Inactive: 1364996 kB' 'Active(anon): 129416 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1364996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120608 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143832 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73688 kB' 'KernelStack: 6416 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.539 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.539 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 
00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.799 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.799 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.800 19:03:58 -- setup/common.sh@33 -- # echo 1024 
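The key scan above is xtrace output from the get_meminfo helper in setup/common.sh: it loads the relevant meminfo file, walks it key by key with IFS=': ', skips every key that is not the one requested, and echoes the value of the matching key (1024 for HugePages_Total here), which the hugepages test then checks against nr_hugepages + surp + resv. As a rough sketch reconstructed from the trace only (not the script's exact code; the function name and the sed-based prefix strip are illustrative assumptions), the lookup amounts to:

    # Illustrative reconstruction of the lookup traced above; names are hypothetical.
    get_meminfo_sketch() {
        local get=$1 node=$2          # e.g. get=HugePages_Total, node=0
        local mem_f=/proc/meminfo
        # Per-node queries read the node-specific file instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Node files prefix every line with "Node <n> "; drop that so the key
        # always lands in $var, then split on ':' and spaces as in the trace.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"           # e.g. 1024 for HugePages_Total on this run
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Total 0   -> 1024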
00:03:21.800 19:03:58 -- setup/common.sh@33 -- # return 0 00:03:21.800 19:03:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.800 19:03:58 -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.800 19:03:58 -- setup/hugepages.sh@27 -- # local node 00:03:21.800 19:03:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.800 19:03:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:21.800 19:03:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:21.800 19:03:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.800 19:03:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.800 19:03:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.800 19:03:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.800 19:03:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.800 19:03:58 -- setup/common.sh@18 -- # local node=0 00:03:21.800 19:03:58 -- setup/common.sh@19 -- # local var val 00:03:21.800 19:03:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.800 19:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.800 19:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.800 19:03:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.800 19:03:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.800 19:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7481636 kB' 'MemUsed: 4760340 kB' 'SwapCached: 0 kB' 'Active: 888688 kB' 'Inactive: 1364996 kB' 'Active(anon): 129504 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1364996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2134652 kB' 'Mapped: 48604 kB' 'AnonPages: 120692 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70144 kB' 'Slab: 143832 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 
19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 
19:03:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # continue 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 19:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 19:03:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 19:03:58 -- setup/common.sh@33 -- # echo 0 00:03:21.800 19:03:58 -- setup/common.sh@33 -- # return 0 00:03:21.800 19:03:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.800 19:03:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.800 19:03:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.800 19:03:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.800 node0=1024 expecting 1024 00:03:21.800 19:03:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:21.800 19:03:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:21.800 00:03:21.800 real 0m1.063s 00:03:21.800 user 0m0.501s 00:03:21.800 sys 0m0.523s 00:03:21.800 19:03:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:21.800 19:03:58 -- common/autotest_common.sh@10 -- # set +x 00:03:21.800 ************************************ 00:03:21.800 END TEST default_setup 00:03:21.800 ************************************ 00:03:21.800 19:03:59 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:21.800 19:03:59 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:21.800 19:03:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:21.800 19:03:59 -- common/autotest_common.sh@10 -- # set +x 00:03:21.800 ************************************ 00:03:21.800 START TEST per_node_1G_alloc 00:03:21.800 ************************************ 
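The per_node_1G_alloc test starting here pins 1 GiB worth of hugepages to a single NUMA node: get_test_nr_hugepages is called with size 1048576 kB and node 0, and with the 2048 kB Hugepagesize reported in the meminfo dumps that works out to the nr_hugepages=512 / NRHUGE=512 values seen below. A minimal sketch of that arithmetic (illustrative shell, not part of the test scripts):

    # 1 GiB per node expressed as a count of 2 MiB hugepages (illustrative only).
    size_kb=1048576        # requested size per node, in kB
    hugepagesize_kb=2048   # Hugepagesize from /proc/meminfo
    echo $(( size_kb / hugepagesize_kb ))   # -> 512, i.e. NRHUGE=512 for HUGENODE=0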
00:03:21.800 19:03:59 -- common/autotest_common.sh@1102 -- # per_node_1G_alloc 00:03:21.800 19:03:59 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:21.800 19:03:59 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:21.800 19:03:59 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:21.800 19:03:59 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:21.800 19:03:59 -- setup/hugepages.sh@51 -- # shift 00:03:21.800 19:03:59 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:21.800 19:03:59 -- setup/hugepages.sh@52 -- # local node_ids 00:03:21.800 19:03:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.800 19:03:59 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:21.800 19:03:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:21.800 19:03:59 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:21.800 19:03:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.800 19:03:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:21.800 19:03:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:21.800 19:03:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.800 19:03:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.800 19:03:59 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:21.800 19:03:59 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:21.800 19:03:59 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:21.800 19:03:59 -- setup/hugepages.sh@73 -- # return 0 00:03:21.800 19:03:59 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:21.800 19:03:59 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:21.800 19:03:59 -- setup/hugepages.sh@146 -- # setup output 00:03:21.800 19:03:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.800 19:03:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:22.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:22.060 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:22.060 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:22.060 19:03:59 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:22.060 19:03:59 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:22.060 19:03:59 -- setup/hugepages.sh@89 -- # local node 00:03:22.060 19:03:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.060 19:03:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.060 19:03:59 -- setup/hugepages.sh@92 -- # local surp 00:03:22.060 19:03:59 -- setup/hugepages.sh@93 -- # local resv 00:03:22.060 19:03:59 -- setup/hugepages.sh@94 -- # local anon 00:03:22.060 19:03:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.060 19:03:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.060 19:03:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.060 19:03:59 -- setup/common.sh@18 -- # local node= 00:03:22.060 19:03:59 -- setup/common.sh@19 -- # local var val 00:03:22.060 19:03:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.060 19:03:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.060 19:03:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.060 19:03:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.060 19:03:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.060 19:03:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.060 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.060 19:03:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8534992 kB' 'MemAvailable: 10457316 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 889112 kB' 'Inactive: 1365008 kB' 'Active(anon): 129928 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120808 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143808 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73664 kB' 'KernelStack: 6392 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.060 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.060 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 
19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.061 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.061 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.324 19:03:59 -- setup/common.sh@33 -- # echo 0 00:03:22.324 19:03:59 -- setup/common.sh@33 -- # return 0 00:03:22.324 19:03:59 -- setup/hugepages.sh@97 -- # anon=0 00:03:22.324 19:03:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.324 19:03:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.324 19:03:59 -- setup/common.sh@18 -- # local node= 00:03:22.324 19:03:59 -- setup/common.sh@19 -- # local var val 00:03:22.324 19:03:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.324 19:03:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.324 19:03:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.324 19:03:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.324 19:03:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.324 19:03:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8534992 kB' 'MemAvailable: 10457316 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888820 kB' 'Inactive: 1365008 kB' 'Active(anon): 129636 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120764 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 
kB' 'Slab: 143844 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73700 kB' 'KernelStack: 6400 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.324 
19:03:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.324 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.324 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # continue 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.325 19:03:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:22.325 19:03:59 -- setup/common.sh@32 -- # [get_meminfo scans the remaining /proc/meminfo fields (VmallocUsed through HugePages_Rsvd), continuing past every key that is not HugePages_Surp]
00:03:22.325 19:03:59 -- setup/common.sh@33 -- # echo 0
00:03:22.325 19:03:59 -- setup/common.sh@33 -- # return 0
00:03:22.326 19:03:59 -- setup/hugepages.sh@99 -- # surp=0
00:03:22.326 19:03:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.326 19:03:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.326 19:03:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.326 19:03:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.326 19:03:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8534992 kB' 'MemAvailable: 10457316 kB' ... 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' ...
[get_meminfo scans the snapshot field by field; no key matches until HugePages_Rsvd]
00:03:22.327 19:03:59 -- setup/common.sh@33 -- # echo 0
00:03:22.327 19:03:59 -- setup/common.sh@33 -- # return 0
00:03:22.327 19:03:59 -- setup/hugepages.sh@100 -- # resv=0
00:03:22.327 nr_hugepages=512
00:03:22.327 19:03:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
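To make the repeated scans above easier to follow, here is a minimal reconstruction of what setup/common.sh's get_meminfo appears to do, based purely on the visible xtrace; the real script may differ in details:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node N " prefix strip below

    # get_meminfo KEY [NODE]: print the value of KEY from /proc/meminfo,
    # or from /sys/devices/system/node/node$NODE/meminfo when NODE is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node N "; strip it as common.sh@29 does
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"       # e.g. "HugePages_Rsvd" -> 0
            return 0
        done
        return 1
    }

With a helper like this, the assignments seen in the trace correspond to surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd).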
resv_hugepages=0
00:03:22.327 19:03:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:03:22.327 19:03:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:03:22.327 19:03:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:22.327 19:03:59 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:22.327 19:03:59 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:03:22.327 19:03:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.327 19:03:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8534992 kB' ... 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' ...
[get_meminfo scans the snapshot field by field until HugePages_Total matches]
00:03:22.329 19:03:59 -- setup/common.sh@33 -- # echo 512
00:03:22.329 19:03:59 -- setup/common.sh@33 -- # return 0
00:03:22.329 19:03:59 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:22.329 19:03:59 -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.329 19:03:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.329 19:03:59 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:22.329 19:03:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
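The consistency check that just passed says the pool reported by the kernel must equal the requested pages plus any surplus and reserved pages. A stand-alone equivalent of that arithmetic (the script itself goes through get_meminfo; the awk lookups here are just a self-contained illustration, with this run's values in comments):

    nr_hugepages=512                                              # requested by the test
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)    # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # 0 in this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 512 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2   # 512 == 512 + 0 + 0
    (( total == nr_hugepages )) && echo "no surplus or reserved pages in use"

The per-node pass that follows repeats the same lookup against node0's meminfo and checks it against the expected per-node split.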
00:03:22.329 19:03:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.329 19:03:59 -- setup/common.sh@18 -- # local node=0
00:03:22.329 19:03:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:22.329 19:03:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:22.329 19:03:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.329 19:03:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8534744 kB' 'MemUsed: 3707232 kB' ... 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[get_meminfo scans the node0 snapshot field by field until HugePages_Surp matches]
00:03:22.330 19:03:59 -- setup/common.sh@33 -- # echo 0
00:03:22.330 19:03:59 -- setup/common.sh@33 -- # return 0
00:03:22.330 19:03:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.330 19:03:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.330 19:03:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.330 node0=512 expecting 512
00:03:22.330 19:03:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:22.330 19:03:59 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:22.330 real 0m0.555s
00:03:22.330 user 0m0.271s
00:03:22.330 sys 0m0.322s
00:03:22.330 19:03:59 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:22.330 19:03:59 -- common/autotest_common.sh@10 -- # set +x
00:03:22.330 ************************************
00:03:22.330 END TEST per_node_1G_alloc
00:03:22.330 ************************************
00:03:22.330 19:03:59 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:22.330 19:03:59 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:03:22.330 19:03:59 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:03:22.330 ************************************
00:03:22.330 START TEST even_2G_alloc
00:03:22.330 ************************************
00:03:22.330 19:03:59 -- common/autotest_common.sh@1102 -- # even_2G_alloc
00:03:22.330 19:03:59 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:22.330 19:03:59 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:22.330 19:03:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.330 19:03:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:22.330 19:03:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:22.330 19:03:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:22.330 19:03:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:22.330 19:03:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
[the remaining per-node bookkeeping (': 0' placeholders at hugepages.sh@83-@84) completes with all 1024 pages assigned to node 0]
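even_2G_alloc asks for 2 GiB of hugepages spread evenly across nodes; with the 2048 kB page size reported in the snapshots above, that resolves to the 1024 pages seen at hugepages.sh@57. A small worked version of the conversion and of the setup call that follows (the division matches the values in the trace, but the exact expression inside get_test_nr_hugepages is not visible here):

    size_kb=2097152                                                      # 2 GiB, as passed to get_test_nr_hugepages
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    echo $(( size_kb / hugepagesize_kb ))                                # 2097152 / 2048 = 1024 pages
    # the test then re-provisions through setup.sh with an even per-node split:
    NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh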
00:03:22.330 19:03:59 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:22.330 19:03:59 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:22.330 19:03:59 -- setup/hugepages.sh@153 -- # setup output
00:03:22.330 19:03:59 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.330 19:03:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:22.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:22.589 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:22.589 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:22.852 19:04:00 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:22.852 19:04:00 -- setup/hugepages.sh@89 -- # [verify_nr_hugepages declares its locals: node, sorted_t, sorted_s, surp, resv, anon]
00:03:22.852 19:04:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:22.852 19:04:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:22.852 19:04:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7491644 kB' 'MemAvailable: 9413968 kB' ... 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' ...
[get_meminfo scans the post-setup snapshot field by field until AnonHugePages matches]
00:03:22.853 19:04:00 -- setup/common.sh@33 -- # echo 0
00:03:22.853 19:04:00 -- setup/common.sh@33 -- # return 0
00:03:22.853 19:04:00 -- setup/hugepages.sh@97 -- # anon=0
continue 00:03:22.853 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.853 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.853 19:04:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.853 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.853 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.853 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.853 19:04:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- 
# continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.854 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.854 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.855 19:04:00 -- setup/common.sh@33 -- # echo 0 00:03:22.855 19:04:00 -- setup/common.sh@33 -- # return 0 00:03:22.855 19:04:00 -- setup/hugepages.sh@99 -- # surp=0 00:03:22.855 19:04:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.855 19:04:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.855 19:04:00 -- setup/common.sh@18 -- # local node= 00:03:22.855 19:04:00 -- setup/common.sh@19 -- # local var val 00:03:22.855 19:04:00 -- 
setup/common.sh@20 -- # local mem_f mem 00:03:22.855 19:04:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.855 19:04:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.855 19:04:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.855 19:04:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.855 19:04:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7491904 kB' 'MemAvailable: 9414228 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888836 kB' 'Inactive: 1365008 kB' 'Active(anon): 129652 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120748 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143844 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73700 kB' 'KernelStack: 6416 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.855 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.855 19:04:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- 
setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.856 19:04:00 -- setup/common.sh@33 -- # echo 0 00:03:22.856 19:04:00 -- setup/common.sh@33 -- # return 0 00:03:22.856 19:04:00 -- setup/hugepages.sh@100 -- # resv=0 00:03:22.856 nr_hugepages=1024 00:03:22.856 19:04:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.856 resv_hugepages=0 00:03:22.856 19:04:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.856 surplus_hugepages=0 00:03:22.856 19:04:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.856 anon_hugepages=0 00:03:22.856 19:04:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.856 19:04:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.856 19:04:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.856 19:04:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.856 19:04:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.856 19:04:00 -- setup/common.sh@18 -- # local node= 00:03:22.856 19:04:00 -- setup/common.sh@19 -- # local var val 00:03:22.856 19:04:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.856 19:04:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.856 19:04:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.856 19:04:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.856 19:04:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.856 19:04:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7492508 kB' 'MemAvailable: 9414832 kB' 
'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888520 kB' 'Inactive: 1365008 kB' 'Active(anon): 129336 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120484 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143832 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73688 kB' 'KernelStack: 6400 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.856 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.856 19:04:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- 
setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 
00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.857 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.857 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.858 19:04:00 -- setup/common.sh@33 -- # echo 1024 00:03:22.858 19:04:00 -- setup/common.sh@33 -- # return 0 00:03:22.858 19:04:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.858 19:04:00 -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.858 19:04:00 -- setup/hugepages.sh@27 -- # local node 00:03:22.858 19:04:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.858 19:04:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:22.858 19:04:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:22.858 19:04:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.858 19:04:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.858 19:04:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.858 19:04:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.858 19:04:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.858 19:04:00 -- setup/common.sh@18 -- # local node=0 00:03:22.858 19:04:00 -- setup/common.sh@19 -- # local var val 00:03:22.858 19:04:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.858 19:04:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.858 19:04:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.858 19:04:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.858 19:04:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.858 19:04:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7492008 kB' 'MemUsed: 4749968 kB' 'SwapCached: 0 kB' 'Active: 888520 kB' 'Inactive: 1365008 kB' 'Active(anon): 129336 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2134656 kB' 'Mapped: 48604 kB' 'AnonPages: 120744 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70144 kB' 'Slab: 143832 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.858 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.858 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- 
setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # continue 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.859 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.859 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.859 19:04:00 -- setup/common.sh@33 -- # echo 0 00:03:22.859 19:04:00 -- setup/common.sh@33 -- # return 0 00:03:22.859 19:04:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.859 19:04:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.859 19:04:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.859 19:04:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.859 node0=1024 expecting 1024 00:03:22.859 19:04:00 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:22.859 19:04:00 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.859 00:03:22.859 real 0m0.499s 00:03:22.859 user 0m0.244s 00:03:22.859 sys 0m0.289s 
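The "node0=1024 expecting 1024" line and the "[[ 1024 == 1024 ]]" check that close the even_2G_alloc test above are its per-node verification step: the hugepage count each node actually reports is echoed next to the count the test requested, and the two must match for the test to pass. A minimal bash sketch of that comparison, with an illustrative loop of my own; only the echo format and the equality check are taken from the trace, and the 1024 figures simply mirror what this run printed:

#!/usr/bin/env bash
# Per-node hugepage verification sketch (layout and names are illustrative).
set -u

declare -A expected=( [0]=1024 )   # pages the test asked each node to provide
declare -A actual=( [0]=1024 )     # pages the node reports via its meminfo

for node in "${!expected[@]}"; do
    # Same "node0=1024 expecting 1024" style line the log prints, then a hard check.
    echo "node${node}=${actual[$node]} expecting ${expected[$node]}"
    [[ ${actual[$node]} == "${expected[$node]}" ]] || exit 1
done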
00:03:22.859 19:04:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:22.859 19:04:00 -- common/autotest_common.sh@10 -- # set +x 00:03:22.859 ************************************ 00:03:22.859 END TEST even_2G_alloc 00:03:22.859 ************************************ 00:03:22.859 19:04:00 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:22.859 19:04:00 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:22.859 19:04:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:22.859 19:04:00 -- common/autotest_common.sh@10 -- # set +x 00:03:22.859 ************************************ 00:03:22.859 START TEST odd_alloc 00:03:22.859 ************************************ 00:03:22.859 19:04:00 -- common/autotest_common.sh@1102 -- # odd_alloc 00:03:22.859 19:04:00 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:22.859 19:04:00 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:22.859 19:04:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.859 19:04:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.859 19:04:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:22.859 19:04:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:22.859 19:04:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.859 19:04:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.859 19:04:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:22.859 19:04:00 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:22.859 19:04:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.859 19:04:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.859 19:04:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.859 19:04:00 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:22.859 19:04:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.859 19:04:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:22.859 19:04:00 -- setup/hugepages.sh@83 -- # : 0 00:03:22.859 19:04:00 -- setup/hugepages.sh@84 -- # : 0 00:03:22.859 19:04:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.859 19:04:00 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:22.859 19:04:00 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:22.859 19:04:00 -- setup/hugepages.sh@160 -- # setup output 00:03:22.859 19:04:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.859 19:04:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:23.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:23.431 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:23.431 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:23.431 19:04:00 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:23.431 19:04:00 -- setup/hugepages.sh@89 -- # local node 00:03:23.431 19:04:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.431 19:04:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.431 19:04:00 -- setup/hugepages.sh@92 -- # local surp 00:03:23.431 19:04:00 -- setup/hugepages.sh@93 -- # local resv 00:03:23.431 19:04:00 -- setup/hugepages.sh@94 -- # local anon 00:03:23.431 19:04:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.431 19:04:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.431 19:04:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.431 19:04:00 -- setup/common.sh@18 -- # local node= 
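The odd_alloc test that starts above asks get_test_nr_hugepages for 2098176 kB (HUGEMEM=2049, i.e. 2049 * 1024 kB) and the trace settles on nr_hugepages=1025: at the 2048 kB default hugepage size that request is 1024.5 pages, so an odd count falls out once it is rounded up to whole pages. A minimal sketch of that conversion, assuming ceiling division; the exact rounding rule inside the real get_test_nr_hugepages is not visible in this excerpt:

#!/usr/bin/env bash
# Size-to-page-count conversion sketch for "get_test_nr_hugepages 2098176".
size_kb=2098176            # HUGEMEM=2049 expressed in kB (2049 * 1024)
default_hugepage_kb=2048   # Hugepagesize reported in /proc/meminfo

# Ceiling division (assumption): round the request up to whole hugepages.
nr_hugepages=$(( (size_kb + default_hugepage_kb - 1) / default_hugepage_kb ))
echo "nr_hugepages=${nr_hugepages}"   # prints nr_hugepages=1025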
00:03:23.431 19:04:00 -- setup/common.sh@19 -- # local var val 00:03:23.431 19:04:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.431 19:04:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.431 19:04:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.431 19:04:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.431 19:04:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.431 19:04:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7483256 kB' 'MemAvailable: 9405580 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888912 kB' 'Inactive: 1365008 kB' 'Active(anon): 129728 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120832 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143868 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73724 kB' 'KernelStack: 6392 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 
19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.431 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.431 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 
00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.432 19:04:00 -- setup/common.sh@33 -- # echo 0 00:03:23.432 19:04:00 -- setup/common.sh@33 -- # return 0 00:03:23.432 19:04:00 -- setup/hugepages.sh@97 -- # anon=0 00:03:23.432 19:04:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.432 19:04:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.432 19:04:00 -- setup/common.sh@18 -- # local node= 00:03:23.432 19:04:00 -- setup/common.sh@19 -- # local var val 00:03:23.432 19:04:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.432 19:04:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.432 19:04:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.432 19:04:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.432 19:04:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.432 19:04:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241976 kB' 'MemFree: 7483872 kB' 'MemAvailable: 9406196 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888836 kB' 'Inactive: 1365008 kB' 'Active(anon): 129652 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120800 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143880 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73736 kB' 'KernelStack: 6416 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.432 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.432 19:04:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 
19:04:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 
19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.433 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.433 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.434 19:04:00 -- setup/common.sh@33 -- # echo 0 00:03:23.434 19:04:00 -- setup/common.sh@33 -- # return 0 00:03:23.434 19:04:00 -- setup/hugepages.sh@99 -- # surp=0 00:03:23.434 19:04:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.434 19:04:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.434 19:04:00 -- setup/common.sh@18 -- # local node= 00:03:23.434 19:04:00 -- setup/common.sh@19 -- # local var val 00:03:23.434 19:04:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.434 19:04:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.434 19:04:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.434 19:04:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.434 19:04:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.434 19:04:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7483872 kB' 'MemAvailable: 9406196 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888676 kB' 'Inactive: 1365008 kB' 'Active(anon): 129492 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120684 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143864 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73720 kB' 'KernelStack: 6448 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 
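The long runs of "[[ <field> == ... ]]" / "continue" entries through this stretch are all the same scan: get_meminfo reads the meminfo file line by line and skips every field until it reaches the one it was asked for (AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total in turn), then echoes that field's value. A minimal self-contained sketch of that idiom with an illustrative helper name; the mapfile slurp, the "Node N" prefix strip, the IFS=': ' read and the continue-until-match loop are the steps the trace shows:

#!/usr/bin/env bash
# Meminfo field scan sketch: echo the value of one field from /proc/meminfo
# (or a per-node meminfo file when a node number is supplied).
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo_field() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # "Node 0 MemTotal: ..." -> "MemTotal: ..."

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # skip every field except the requested one
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_field HugePages_Total   # on the machine traced here this would print 1025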
00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.434 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.434 19:04:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 
-- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 
19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.435 19:04:00 -- setup/common.sh@33 -- # echo 0 00:03:23.435 19:04:00 -- setup/common.sh@33 -- # return 0 00:03:23.435 19:04:00 -- setup/hugepages.sh@100 -- # resv=0 00:03:23.435 nr_hugepages=1025 00:03:23.435 19:04:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:23.435 resv_hugepages=0 00:03:23.435 19:04:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.435 surplus_hugepages=0 00:03:23.435 19:04:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.435 anon_hugepages=0 00:03:23.435 19:04:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.435 19:04:00 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:23.435 19:04:00 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:23.435 19:04:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.435 19:04:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.435 19:04:00 -- setup/common.sh@18 -- # local node= 00:03:23.435 19:04:00 -- setup/common.sh@19 -- # local var val 00:03:23.435 19:04:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.435 19:04:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.435 19:04:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.435 19:04:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.435 19:04:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.435 19:04:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7484852 kB' 'MemAvailable: 9407176 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888552 kB' 'Inactive: 1365008 kB' 'Active(anon): 129368 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120592 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143856 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73712 kB' 'KernelStack: 6416 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.435 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.435 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 
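Just before this HugePages_Total scan the trace runs "(( 1025 == nr_hugepages + surp + resv ))" and "(( 1025 == nr_hugepages ))" with resv_hugepages=0 and surplus_hugepages=0: the page count the test expects has to be accounted for once surplus and reserved pages are added in. A minimal sketch of that accounting check using the figures echoed in this run; the variable names are illustrative and how hugepages.sh composes the exact comparison is only partly visible here:

#!/usr/bin/env bash
# Hugepage accounting sketch: the expected pool size must equal the requested
# pages plus any surplus and reserved pages (both extras are zero in this run).
nr_hugepages=1025   # requested by odd_alloc
surp=0              # HugePages_Surp read back from /proc/meminfo
resv=0              # HugePages_Rsvd read back from /proc/meminfo
expected=1025       # count the test is verifying against

(( expected == nr_hugepages + surp + resv )) || { echo 'hugepage accounting mismatch'; exit 1; }
(( expected == nr_hugepages )) && echo "nr_hugepages=${nr_hugepages} verified"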
00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 
19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.436 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.436 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.437 19:04:00 -- setup/common.sh@33 -- # echo 1025 00:03:23.437 19:04:00 -- setup/common.sh@33 -- # return 0 00:03:23.437 19:04:00 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:23.437 19:04:00 -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.437 19:04:00 -- setup/hugepages.sh@27 -- # local node 00:03:23.437 19:04:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.437 19:04:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:23.437 19:04:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:23.437 19:04:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.437 19:04:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.437 19:04:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.437 19:04:00 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.437 19:04:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.437 19:04:00 -- setup/common.sh@18 -- # local node=0 00:03:23.437 19:04:00 -- setup/common.sh@19 -- # local var val 00:03:23.437 19:04:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.437 19:04:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.437 19:04:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.437 19:04:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.437 19:04:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.437 19:04:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7484852 kB' 'MemUsed: 4757124 kB' 'SwapCached: 0 kB' 'Active: 888508 kB' 'Inactive: 1365008 kB' 'Active(anon): 129324 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2134656 kB' 'Mapped: 48604 kB' 'AnonPages: 120468 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70144 kB' 'Slab: 143856 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 
00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.437 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.437 19:04:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 
19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # continue 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.438 19:04:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.438 19:04:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.438 19:04:00 -- setup/common.sh@33 -- # echo 0 00:03:23.438 19:04:00 -- setup/common.sh@33 -- # return 0 00:03:23.438 19:04:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.438 19:04:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.438 19:04:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.438 node0=1025 expecting 1025 00:03:23.438 19:04:00 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:23.438 19:04:00 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:23.438 00:03:23.438 real 0m0.532s 00:03:23.438 user 0m0.253s 00:03:23.438 sys 0m0.313s 00:03:23.438 19:04:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:23.438 19:04:00 -- common/autotest_common.sh@10 -- # set +x 00:03:23.438 ************************************ 00:03:23.438 END TEST odd_alloc 00:03:23.438 ************************************ 00:03:23.438 19:04:00 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:23.438 19:04:00 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:23.438 19:04:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:23.438 19:04:00 -- common/autotest_common.sh@10 -- # set +x 00:03:23.438 ************************************ 00:03:23.438 START TEST custom_alloc 00:03:23.438 ************************************ 00:03:23.438 19:04:00 -- common/autotest_common.sh@1102 -- # custom_alloc 00:03:23.438 19:04:00 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:23.438 19:04:00 -- setup/hugepages.sh@169 -- # local node 00:03:23.438 19:04:00 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:23.438 19:04:00 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:23.438 19:04:00 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:23.438 19:04:00 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:23.438 19:04:00 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:23.438 19:04:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
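The trace above captures two recurring steps of these hugepage tests: test/setup/hugepages.sh sizing the custom_alloc run (a 1048576 kB request with the 2048 kB "Hugepagesize" shown in the meminfo dumps gives nr_hugepages=512), and test/setup/common.sh's get_meminfo walking /proc/meminfo (or a per-node meminfo file) with mapfile and an "IFS=': ' read -r var val _" loop. Below is a minimal standalone Bash sketch of those two ideas, for illustration only; the function names, the 2048 kB default, and the simplified loop are assumptions inferred from the trace, not the actual SPDK helpers.

#!/usr/bin/env bash
# Simplified re-creation of the sizing arithmetic and meminfo parsing visible in the trace above.
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

# Hugepage count for a size given in kB: 1048576 kB / 2048 kB per page = 512 pages.
pages_for_size_kb() {
    local size_kb=$1 hugepage_kb=${2:-2048}   # 2048 kB matches "Hugepagesize" in the dumps above
    echo $(( size_kb / hugepage_kb ))
}

# Fetch one field from /proc/meminfo, or from a per-node meminfo file when a node
# number is given, mirroring the mapfile + "IFS=': ' read -r var val _" loop in the trace.
get_meminfo_field() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

pages_for_size_kb 1048576            # prints 512, matching nr_hugepages=512 above
get_meminfo_field HugePages_Total    # system-wide hugepage count from /proc/meminfo
get_meminfo_field HugePages_Surp 0   # node0 surplus hugepages, as queried in the trace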
00:03:23.438 19:04:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:23.438 19:04:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.438 19:04:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.438 19:04:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:23.438 19:04:00 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:23.438 19:04:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.438 19:04:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.438 19:04:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:23.438 19:04:00 -- setup/hugepages.sh@83 -- # : 0 00:03:23.438 19:04:00 -- setup/hugepages.sh@84 -- # : 0 00:03:23.438 19:04:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:23.438 19:04:00 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:23.438 19:04:00 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:23.438 19:04:00 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:23.438 19:04:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.438 19:04:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.438 19:04:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:23.438 19:04:00 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:23.438 19:04:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.438 19:04:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.438 19:04:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:23.438 19:04:00 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:23.438 19:04:00 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:23.438 19:04:00 -- setup/hugepages.sh@78 -- # return 0 00:03:23.438 19:04:00 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:23.438 19:04:00 -- setup/hugepages.sh@187 -- # setup output 00:03:23.438 19:04:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.438 19:04:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:23.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:23.959 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:23.959 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:23.959 19:04:01 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:23.959 19:04:01 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:23.959 19:04:01 -- setup/hugepages.sh@89 -- # local node 00:03:23.959 19:04:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.959 19:04:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.959 19:04:01 -- setup/hugepages.sh@92 -- # local surp 00:03:23.959 19:04:01 -- setup/hugepages.sh@93 -- # local resv 00:03:23.959 19:04:01 -- setup/hugepages.sh@94 -- # local anon 00:03:23.959 19:04:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.959 19:04:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.959 
19:04:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.959 19:04:01 -- setup/common.sh@18 -- # local node= 00:03:23.959 19:04:01 -- setup/common.sh@19 -- # local var val 00:03:23.959 19:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.959 19:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.959 19:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.959 19:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.959 19:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.959 19:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8545400 kB' 'MemAvailable: 10467724 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888820 kB' 'Inactive: 1365008 kB' 'Active(anon): 129636 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121028 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143856 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73712 kB' 'KernelStack: 6376 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 
19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.959 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.959 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.960 19:04:01 -- setup/common.sh@33 -- # echo 0 00:03:23.960 19:04:01 -- setup/common.sh@33 -- # return 0 00:03:23.960 19:04:01 -- setup/hugepages.sh@97 -- # anon=0 00:03:23.960 19:04:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.960 19:04:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.960 19:04:01 -- setup/common.sh@18 -- # local node= 00:03:23.960 19:04:01 -- setup/common.sh@19 -- # local var val 00:03:23.960 19:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.960 19:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.960 19:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.960 19:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.960 19:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.960 19:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8545496 kB' 'MemAvailable: 10467820 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888812 kB' 'Inactive: 1365008 kB' 'Active(anon): 129628 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 120768 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143876 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73732 kB' 'KernelStack: 6400 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 
00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.960 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.960 19:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 
19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.961 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.961 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 19:04:01 -- setup/common.sh@33 -- # echo 0 00:03:23.962 19:04:01 -- setup/common.sh@33 -- # return 0 00:03:23.962 19:04:01 -- setup/hugepages.sh@99 -- # surp=0 00:03:23.962 19:04:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.962 19:04:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.962 19:04:01 -- setup/common.sh@18 -- # local node= 00:03:23.962 19:04:01 -- setup/common.sh@19 -- # local var val 00:03:23.962 19:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.962 19:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.962 19:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.962 19:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.962 19:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.962 19:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8545496 kB' 'MemAvailable: 10467820 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888588 kB' 'Inactive: 1365008 kB' 'Active(anon): 129404 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120828 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143848 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73704 kB' 'KernelStack: 6416 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 
00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 19:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 
-- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 19:04:01 -- setup/common.sh@33 -- # echo 0 00:03:23.963 19:04:01 -- setup/common.sh@33 -- # return 0 00:03:23.963 19:04:01 -- setup/hugepages.sh@100 -- # resv=0 00:03:23.963 nr_hugepages=512 00:03:23.963 resv_hugepages=0 00:03:23.963 surplus_hugepages=0 00:03:23.963 anon_hugepages=0 00:03:23.963 19:04:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:23.963 19:04:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.963 19:04:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.963 19:04:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.963 19:04:01 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:23.963 19:04:01 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:23.963 19:04:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.963 19:04:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.963 19:04:01 -- setup/common.sh@18 -- # local node= 00:03:23.963 19:04:01 -- setup/common.sh@19 -- # local var val 00:03:23.963 19:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.963 19:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.963 19:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.963 19:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.963 19:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.963 19:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8545496 kB' 'MemAvailable: 10467820 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888512 kB' 'Inactive: 1365008 kB' 'Active(anon): 129328 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120724 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143844 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73700 kB' 'KernelStack: 6400 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 
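The trace above scans /proc/meminfo one field at a time, skipping entries until it reaches the requested key (HugePages_Surp, then HugePages_Rsvd, then HugePages_Total), and echoes that key's value. Below is a minimal bash sketch of what the traced get_meminfo helper appears to do; the field and node arguments come straight from the trace, but the internals (prefix stripping, return codes) are reconstructions, not a copy of setup/common.sh.

    get_meminfo() {
        # get_meminfo <field> [node] -> print the numeric value of <field>
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            # per-node statistics live under sysfs when a node id is given
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}        # per-node files prefix each field with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # keep scanning until the requested field shows up
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    # e.g. surp=$(get_meminfo HugePages_Surp); rsvd=$(get_meminfo HugePages_Rsvd)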
00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.963 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.963 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 19:04:01 -- setup/common.sh@33 -- # echo 512 00:03:23.966 19:04:01 -- setup/common.sh@33 -- # return 0 00:03:23.966 19:04:01 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:23.966 19:04:01 -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.966 19:04:01 -- setup/hugepages.sh@27 -- # local node 00:03:23.966 19:04:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.966 19:04:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.966 19:04:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:23.966 19:04:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.966 19:04:01 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:03:23.966 19:04:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.966 19:04:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.966 19:04:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.966 19:04:01 -- setup/common.sh@18 -- # local node=0 00:03:23.966 19:04:01 -- setup/common.sh@19 -- # local var val 00:03:23.966 19:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.966 19:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.966 19:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.966 19:04:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.966 19:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.966 19:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8546108 kB' 'MemUsed: 3695868 kB' 'SwapCached: 0 kB' 'Active: 888588 kB' 'Inactive: 1365008 kB' 'Active(anon): 129404 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2134656 kB' 'Mapped: 48604 kB' 'AnonPages: 120824 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70144 kB' 'Slab: 143832 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 
19:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.966 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.966 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 
-- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # continue 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 19:04:01 -- setup/common.sh@33 -- # echo 0 00:03:23.967 19:04:01 -- setup/common.sh@33 -- # return 0 00:03:23.967 node0=512 expecting 512 00:03:23.967 19:04:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.967 19:04:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.967 19:04:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.967 19:04:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.967 19:04:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:23.967 19:04:01 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:23.967 00:03:23.967 real 0m0.544s 00:03:23.967 user 0m0.258s 00:03:23.967 sys 0m0.309s 00:03:23.967 19:04:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:23.967 ************************************ 00:03:23.967 END TEST custom_alloc 00:03:23.967 ************************************ 00:03:23.967 19:04:01 -- common/autotest_common.sh@10 -- # set +x 00:03:24.226 19:04:01 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:24.226 19:04:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:24.226 19:04:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:24.226 19:04:01 -- common/autotest_common.sh@10 -- # set +x 00:03:24.226 ************************************ 00:03:24.226 START TEST no_shrink_alloc 00:03:24.226 ************************************ 00:03:24.226 19:04:01 -- common/autotest_common.sh@1102 -- # no_shrink_alloc 00:03:24.226 19:04:01 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:24.226 19:04:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.226 19:04:01 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:24.226 19:04:01 -- setup/hugepages.sh@51 -- # shift 00:03:24.226 19:04:01 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:24.226 19:04:01 -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.226 19:04:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.226 19:04:01 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:03:24.226 19:04:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:24.226 19:04:01 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:24.226 19:04:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.226 19:04:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.226 19:04:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:24.226 19:04:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.226 19:04:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.226 19:04:01 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:24.226 19:04:01 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.226 19:04:01 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:24.226 19:04:01 -- setup/hugepages.sh@73 -- # return 0 00:03:24.226 19:04:01 -- setup/hugepages.sh@198 -- # setup output 00:03:24.226 19:04:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.226 19:04:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:24.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:24.487 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:24.487 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:24.487 19:04:01 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:24.487 19:04:01 -- setup/hugepages.sh@89 -- # local node 00:03:24.487 19:04:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.487 19:04:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.487 19:04:01 -- setup/hugepages.sh@92 -- # local surp 00:03:24.487 19:04:01 -- setup/hugepages.sh@93 -- # local resv 00:03:24.487 19:04:01 -- setup/hugepages.sh@94 -- # local anon 00:03:24.487 19:04:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.487 19:04:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.487 19:04:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.487 19:04:01 -- setup/common.sh@18 -- # local node= 00:03:24.487 19:04:01 -- setup/common.sh@19 -- # local var val 00:03:24.487 19:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.487 19:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.487 19:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.487 19:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.487 19:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.487 19:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7499884 kB' 'MemAvailable: 9422208 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888888 kB' 'Inactive: 1365008 kB' 'Active(anon): 129704 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120796 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143860 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73716 kB' 'KernelStack: 6408 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 
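The no_shrink_alloc run requests 2097152 kB, which at the 2048 kB Hugepagesize reported above is the 1024-page pool visible in HugePages_Total, and the trace gates the AnonHugePages read on transparent_hugepage not being set to "never". The following is a rough sketch of the verification arithmetic implied by the trace; verify_nr_hugepages is the function name shown in the log, but the body is reconstructed rather than copied from setup/hugepages.sh, and it reuses the get_meminfo sketch above.

    # assumes the get_meminfo sketch above is in scope
    verify_nr_hugepages() {
        local expected=$1   # 512 for the custom_alloc run above, 1024 for no_shrink_alloc
        local surp resv total anon=0
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        total=$(get_meminfo HugePages_Total)
        # the preallocated pool must account for requested, surplus and reserved pages
        (( total == expected + surp + resv )) || return 1
        # anonymous hugepages are only sampled when THP is not disabled outright
        if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
            anon=$(get_meminfo AnonHugePages)
        fi
        echo "anon_hugepages=$anon"
        # single NUMA node in this VM, so the whole pool should land on node0
        [[ $(get_meminfo HugePages_Total 0) == "$expected" ]] &&
            echo "node0=$expected expecting $expected"
    }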
00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 
19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.487 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # 
continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 19:04:01 -- setup/common.sh@33 -- # echo 0 00:03:24.488 19:04:01 -- setup/common.sh@33 -- # return 0 00:03:24.488 19:04:01 -- setup/hugepages.sh@97 -- # anon=0 00:03:24.488 19:04:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.488 19:04:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.488 19:04:01 -- setup/common.sh@18 -- # local node= 00:03:24.488 19:04:01 -- setup/common.sh@19 -- # local var val 00:03:24.488 19:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.488 19:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.488 19:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.488 19:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.488 19:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.488 19:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7499884 kB' 'MemAvailable: 9422208 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888588 kB' 'Inactive: 1365008 kB' 'Active(anon): 129404 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120516 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143860 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73716 kB' 'KernelStack: 6416 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # 
continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.488 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 
00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 19:04:01 -- setup/common.sh@33 -- # echo 0 00:03:24.489 19:04:01 -- setup/common.sh@33 -- # return 0 00:03:24.489 19:04:01 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.489 19:04:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.489 19:04:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.489 19:04:01 -- setup/common.sh@18 -- # local node= 00:03:24.489 19:04:01 -- setup/common.sh@19 -- # local var val 00:03:24.489 19:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.489 19:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.489 19:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.489 19:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.489 19:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.489 19:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7499884 kB' 'MemAvailable: 9422208 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888792 kB' 'Inactive: 1365008 kB' 'Active(anon): 129608 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120736 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143856 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73712 kB' 'KernelStack: 6384 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.489 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- 
setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.490 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 19:04:01 -- setup/common.sh@33 -- # echo 0 00:03:24.491 19:04:01 -- setup/common.sh@33 -- # return 0 00:03:24.491 19:04:01 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.491 nr_hugepages=1024 00:03:24.491 resv_hugepages=0 00:03:24.491 19:04:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.491 19:04:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.491 surplus_hugepages=0 00:03:24.491 19:04:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.491 anon_hugepages=0 00:03:24.491 19:04:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.491 19:04:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.491 19:04:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.491 19:04:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.491 19:04:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.491 19:04:01 -- setup/common.sh@18 -- # local node= 00:03:24.491 19:04:01 -- setup/common.sh@19 -- # local var val 00:03:24.491 19:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.491 19:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:03:24.491 19:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.491 19:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.491 19:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.491 19:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7500156 kB' 'MemAvailable: 9422480 kB' 'Buffers: 2436 kB' 'Cached: 2132220 kB' 'SwapCached: 0 kB' 'Active: 888660 kB' 'Inactive: 1365008 kB' 'Active(anon): 129476 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120828 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143844 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73700 kB' 'KernelStack: 6416 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- 
setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.491 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 
00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.752 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.752 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.753 19:04:01 -- setup/common.sh@33 -- # echo 1024 00:03:24.753 19:04:01 -- setup/common.sh@33 -- # return 0 00:03:24.753 19:04:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.753 19:04:01 -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.753 19:04:01 -- setup/hugepages.sh@27 -- # local node 00:03:24.753 19:04:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.753 19:04:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.753 19:04:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:24.753 19:04:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.753 19:04:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.753 19:04:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.753 19:04:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.753 19:04:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.753 19:04:01 -- setup/common.sh@18 -- # local node=0 00:03:24.753 19:04:01 -- setup/common.sh@19 -- # local var val 00:03:24.753 19:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.753 19:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.753 19:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.753 19:04:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.753 19:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.753 19:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7500408 kB' 'MemUsed: 4741568 kB' 'SwapCached: 0 kB' 'Active: 888644 kB' 'Inactive: 1365008 kB' 'Active(anon): 129460 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2134656 kB' 'Mapped: 48604 kB' 'AnonPages: 120860 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70144 kB' 'Slab: 143844 
kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.753 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 
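[editor note] Once the lookups above return (anon=0, surp=0, resv=0, nr_hugepages=1024), hugepages.sh only has to confirm that the pages visible in /proc/meminfo and in the per-node file account for everything that was requested; the '(( 1024 == nr_hugepages + surp + resv ))' entries are that check, with the expected count being 1024 in this run. A rough sketch of the same arithmetic (the variable name 'expected' is an assumption, not the SPDK source; get_meminfo_sketch is the helper sketched earlier):

    expected=1024                                        # pages requested for this run
    nr_hugepages=$(get_meminfo_sketch HugePages_Total)   # 1024 in the dump above
    surp=$(get_meminfo_sketch HugePages_Surp)            # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)            # 0
    if (( expected == nr_hugepages + surp + resv )); then
        echo "node0=$(get_meminfo_sketch HugePages_Total 0) expecting $expected"
    fi

which lines up with the 'node0=1024 expecting 1024' entry that closes this pass below.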
00:03:24.753 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.753 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- 
setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # continue 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.754 19:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.754 19:04:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.754 19:04:01 -- setup/common.sh@33 -- # echo 0 00:03:24.754 19:04:01 -- setup/common.sh@33 -- # return 0 00:03:24.754 19:04:01 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:03:24.754 19:04:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.754 19:04:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.754 19:04:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.754 node0=1024 expecting 1024 00:03:24.754 19:04:01 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:24.754 19:04:01 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:24.754 19:04:01 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:24.754 19:04:01 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:24.754 19:04:01 -- setup/hugepages.sh@202 -- # setup output 00:03:24.754 19:04:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.754 19:04:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:25.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:25.016 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:25.016 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:25.016 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:25.016 19:04:02 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:25.016 19:04:02 -- setup/hugepages.sh@89 -- # local node 00:03:25.016 19:04:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.016 19:04:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.016 19:04:02 -- setup/hugepages.sh@92 -- # local surp 00:03:25.016 19:04:02 -- setup/hugepages.sh@93 -- # local resv 00:03:25.016 19:04:02 -- setup/hugepages.sh@94 -- # local anon 00:03:25.016 19:04:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.016 19:04:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.016 19:04:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.016 19:04:02 -- setup/common.sh@18 -- # local node= 00:03:25.016 19:04:02 -- setup/common.sh@19 -- # local var val 00:03:25.016 19:04:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.016 19:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.016 19:04:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.016 19:04:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.016 19:04:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.016 19:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7506584 kB' 'MemAvailable: 9428912 kB' 'Buffers: 2436 kB' 'Cached: 2132224 kB' 'SwapCached: 0 kB' 'Active: 889116 kB' 'Inactive: 1365012 kB' 'Active(anon): 129932 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121116 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143864 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73720 kB' 'KernelStack: 6424 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- 
setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.016 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.016 19:04:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.017 19:04:02 -- setup/common.sh@33 -- # echo 0 00:03:25.017 19:04:02 -- setup/common.sh@33 -- # return 0 00:03:25.017 19:04:02 -- setup/hugepages.sh@97 -- # anon=0 00:03:25.017 19:04:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.017 19:04:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.017 19:04:02 -- setup/common.sh@18 -- # local node= 00:03:25.017 19:04:02 -- setup/common.sh@19 -- # local var val 00:03:25.017 19:04:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.017 19:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.017 19:04:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.017 19:04:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.017 19:04:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.017 19:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7506584 kB' 'MemAvailable: 9428912 kB' 'Buffers: 2436 kB' 'Cached: 2132224 kB' 'SwapCached: 0 kB' 'Active: 888868 kB' 'Inactive: 1365012 kB' 'Active(anon): 129684 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120852 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143880 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73736 kB' 'KernelStack: 6416 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 
19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.017 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.017 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
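The meminfo snapshot printed just above already shows the state this pass is verifying: 1024 hugepages of the default 2048 kB size are allocated and all of them are still free. A quick sanity check of those figures (values copied from the snapshot; nothing here is part of the test scripts):

    # figures taken from the snapshot above
    hp_total=1024          # HugePages_Total
    hp_free=1024           # HugePages_Free
    hp_size_kb=2048        # Hugepagesize
    echo $(( hp_total * hp_size_kb ))   # 2097152 kB, matching the Hugetlb field (2 GiB)
    echo $(( hp_total - hp_free ))      # 0 pages in use, so all 1024 remain available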
00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.018 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.018 19:04:02 -- 
setup/common.sh@33 -- # echo 0 00:03:25.018 19:04:02 -- setup/common.sh@33 -- # return 0 00:03:25.018 19:04:02 -- setup/hugepages.sh@99 -- # surp=0 00:03:25.018 19:04:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.018 19:04:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.018 19:04:02 -- setup/common.sh@18 -- # local node= 00:03:25.018 19:04:02 -- setup/common.sh@19 -- # local var val 00:03:25.018 19:04:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.018 19:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.018 19:04:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.018 19:04:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.018 19:04:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.018 19:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.018 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7506584 kB' 'MemAvailable: 9428912 kB' 'Buffers: 2436 kB' 'Cached: 2132224 kB' 'SwapCached: 0 kB' 'Active: 888892 kB' 'Inactive: 1365012 kB' 'Active(anon): 129708 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120852 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143876 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73732 kB' 'KernelStack: 6416 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- 
setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 
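Once this HugePages_Rsvd scan completes, verify_nr_hugepages has everything it needs: the anon, surplus and reserved hugepage counts (all 0 in this run) plus the expected total. The values echoed a little further down (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) reduce to the accounting sketched here; this is a paraphrase of the visible (( ... )) tests, with assumed variable names, not the literal hugepages.sh code.

    # paraphrase of the verify_nr_hugepages accounting seen in the trace
    anon=$(get_meminfo AnonHugePages)      # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    total=$(get_meminfo HugePages_Total)   # 1024
    nr_hugepages=1024                      # what the test configured
    (( total == nr_hugepages + surp + resv ))  # 1024 == 1024 + 0 + 0
    (( total == nr_hugepages ))                # and no surplus or reserved pages at all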
00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.019 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.019 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.020 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.020 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.281 19:04:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.281 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.281 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.281 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.281 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.281 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.281 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.281 19:04:02 -- setup/common.sh@33 -- # echo 0 00:03:25.281 19:04:02 -- setup/common.sh@33 -- # return 0 00:03:25.281 19:04:02 -- setup/hugepages.sh@100 -- # resv=0 00:03:25.281 nr_hugepages=1024 00:03:25.281 19:04:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:25.281 resv_hugepages=0 00:03:25.281 19:04:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.281 surplus_hugepages=0 00:03:25.281 19:04:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.281 anon_hugepages=0 00:03:25.281 19:04:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.281 19:04:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.281 19:04:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:25.281 19:04:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.281 19:04:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.281 19:04:02 -- setup/common.sh@18 -- # local node= 00:03:25.281 19:04:02 -- setup/common.sh@19 -- # local var val 00:03:25.281 19:04:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.281 19:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.281 19:04:02 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:25.281 19:04:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.281 19:04:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.281 19:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.281 19:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7506584 kB' 'MemAvailable: 9428912 kB' 'Buffers: 2436 kB' 'Cached: 2132224 kB' 'SwapCached: 0 kB' 'Active: 888852 kB' 'Inactive: 1365012 kB' 'Active(anon): 129668 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120780 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 70144 kB' 'Slab: 143876 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73732 kB' 'KernelStack: 6400 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.281 19:04:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.281 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.281 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 
19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.282 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.282 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.283 19:04:02 -- setup/common.sh@33 -- # echo 1024 00:03:25.283 19:04:02 -- setup/common.sh@33 -- # return 0 00:03:25.283 19:04:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.283 19:04:02 -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.283 19:04:02 -- setup/hugepages.sh@27 -- # local node 00:03:25.283 19:04:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.283 19:04:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:25.283 19:04:02 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:25.283 19:04:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.283 19:04:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.283 19:04:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.283 19:04:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.283 19:04:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.283 19:04:02 -- setup/common.sh@18 -- # local node=0 00:03:25.283 19:04:02 -- setup/common.sh@19 -- # local var val 00:03:25.283 19:04:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.283 19:04:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.283 19:04:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.283 19:04:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.283 19:04:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.283 19:04:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7506584 kB' 'MemUsed: 4735392 kB' 'SwapCached: 0 kB' 'Active: 888876 kB' 'Inactive: 1365012 kB' 'Active(anon): 129692 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1365012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2134660 kB' 'Mapped: 48604 kB' 'AnonPages: 120796 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70144 kB' 'Slab: 143876 kB' 'SReclaimable: 70144 kB' 'SUnreclaim: 73732 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 
00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.283 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.283 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # continue 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.284 19:04:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.284 19:04:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.284 19:04:02 -- setup/common.sh@33 -- # echo 0 00:03:25.284 19:04:02 -- setup/common.sh@33 -- # return 0 00:03:25.284 19:04:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.284 19:04:02 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.284 19:04:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.284 19:04:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.284 node0=1024 expecting 1024 00:03:25.284 19:04:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:25.284 19:04:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:25.284 00:03:25.284 real 0m1.113s 00:03:25.284 user 0m0.541s 00:03:25.284 sys 0m0.617s 00:03:25.284 19:04:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:25.284 19:04:02 -- common/autotest_common.sh@10 -- # set +x 00:03:25.284 ************************************ 00:03:25.284 END TEST no_shrink_alloc 00:03:25.284 ************************************ 00:03:25.284 19:04:02 -- setup/hugepages.sh@217 -- # clear_hp 00:03:25.284 19:04:02 -- setup/hugepages.sh@37 -- # local node hp 00:03:25.284 19:04:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.284 19:04:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.284 19:04:02 -- setup/hugepages.sh@41 -- # echo 0 00:03:25.284 19:04:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.284 19:04:02 -- setup/hugepages.sh@41 -- # echo 0 00:03:25.284 19:04:02 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:25.284 19:04:02 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:25.284 00:03:25.284 real 0m4.752s 00:03:25.284 user 0m2.225s 00:03:25.284 sys 0m2.635s 00:03:25.284 19:04:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:25.284 ************************************ 00:03:25.284 END TEST hugepages 00:03:25.284 ************************************ 00:03:25.284 19:04:02 -- common/autotest_common.sh@10 -- # set +x 00:03:25.284 19:04:02 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:25.284 19:04:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:25.284 19:04:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:25.284 19:04:02 -- common/autotest_common.sh@10 -- # set +x 00:03:25.284 ************************************ 00:03:25.284 START TEST driver 00:03:25.284 ************************************ 00:03:25.284 19:04:02 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:25.284 * Looking for test storage... 
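The long xtrace run above is setup/common.sh's get_meminfo scanning /sys/devices/system/node/node0/meminfo record by record: the "Node 0 " prefix is stripped, each line is split on ': ' into a key and a value, and every key is compared against the requested field (HugePages_Total, then HugePages_Surp) until a match lets the function echo the value and return. A condensed, standalone sketch of that idiom follows; the /proc/meminfo fallback and the example at the bottom are illustrative, not the test's own code.

#!/usr/bin/env bash
# Minimal sketch of the per-node meminfo lookup exercised in the trace above.
# get_meminfo FIELD [NODE] prints the value of FIELD, reading the per-node
# file under /sys when NODE is given and /proc/meminfo otherwise.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that so each
    # entry looks like "Key:       value kB", exactly as in /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example: the inputs behind "(( 1024 == nr_hugepages + surp + resv ))" above.
total=$(get_meminfo HugePages_Total 0)
surp=$(get_meminfo HugePages_Surp 0)
echo "node0: HugePages_Total=$total HugePages_Surp=$surp"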
00:03:25.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:25.284 19:04:02 -- setup/driver.sh@68 -- # setup reset 00:03:25.284 19:04:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.284 19:04:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:25.852 19:04:03 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:25.852 19:04:03 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:25.852 19:04:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:25.852 19:04:03 -- common/autotest_common.sh@10 -- # set +x 00:03:25.852 ************************************ 00:03:25.852 START TEST guess_driver 00:03:25.852 ************************************ 00:03:25.852 19:04:03 -- common/autotest_common.sh@1102 -- # guess_driver 00:03:25.852 19:04:03 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:25.852 19:04:03 -- setup/driver.sh@47 -- # local fail=0 00:03:25.852 19:04:03 -- setup/driver.sh@49 -- # pick_driver 00:03:25.853 19:04:03 -- setup/driver.sh@36 -- # vfio 00:03:25.853 19:04:03 -- setup/driver.sh@21 -- # local iommu_grups 00:03:25.853 19:04:03 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:25.853 19:04:03 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:25.853 19:04:03 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:25.853 19:04:03 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:25.853 19:04:03 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:25.853 19:04:03 -- setup/driver.sh@32 -- # return 1 00:03:25.853 19:04:03 -- setup/driver.sh@38 -- # uio 00:03:25.853 19:04:03 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:25.853 19:04:03 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:25.853 19:04:03 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:25.853 19:04:03 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:25.853 19:04:03 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:25.853 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:25.853 19:04:03 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:25.853 19:04:03 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:25.853 19:04:03 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:25.853 Looking for driver=uio_pci_generic 00:03:25.853 19:04:03 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:25.853 19:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:25.853 19:04:03 -- setup/driver.sh@45 -- # setup output config 00:03:25.853 19:04:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.853 19:04:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:26.789 19:04:03 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:26.789 19:04:03 -- setup/driver.sh@58 -- # continue 00:03:26.789 19:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.789 19:04:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.789 19:04:04 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:26.789 19:04:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.789 19:04:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.789 19:04:04 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:26.789 19:04:04 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.789 19:04:04 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:26.789 19:04:04 -- setup/driver.sh@65 -- # setup reset 00:03:26.789 19:04:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.789 19:04:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:27.357 00:03:27.357 real 0m1.476s 00:03:27.357 user 0m0.567s 00:03:27.357 sys 0m0.919s 00:03:27.357 ************************************ 00:03:27.357 END TEST guess_driver 00:03:27.357 ************************************ 00:03:27.357 19:04:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:27.357 19:04:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.357 00:03:27.357 real 0m2.161s 00:03:27.357 user 0m0.813s 00:03:27.357 sys 0m1.420s 00:03:27.357 19:04:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:27.357 19:04:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.357 ************************************ 00:03:27.357 END TEST driver 00:03:27.357 ************************************ 00:03:27.616 19:04:04 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:27.616 19:04:04 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:27.616 19:04:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:27.616 19:04:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.616 ************************************ 00:03:27.616 START TEST devices 00:03:27.616 ************************************ 00:03:27.616 19:04:04 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:27.616 * Looking for test storage... 00:03:27.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:27.616 19:04:04 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:27.616 19:04:04 -- setup/devices.sh@192 -- # setup reset 00:03:27.616 19:04:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.616 19:04:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:28.552 19:04:05 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:28.552 19:04:05 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:03:28.552 19:04:05 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:03:28.552 19:04:05 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:03:28.552 19:04:05 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:28.552 19:04:05 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:03:28.552 19:04:05 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:03:28.552 19:04:05 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.552 19:04:05 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:28.552 19:04:05 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:28.552 19:04:05 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:03:28.552 19:04:05 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:03:28.552 19:04:05 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:28.552 19:04:05 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:28.552 19:04:05 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:28.552 19:04:05 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:03:28.552 19:04:05 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:03:28.552 19:04:05 -- 
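The guess_driver run traced above prefers vfio only when the IOMMU is actually usable (at least one group under /sys/kernel/iommu_groups, or unsafe no-IOMMU mode enabled) and otherwise falls back to uio_pci_generic, accepting the fallback only if modprobe --show-depends proves the module exists as a .ko on disk. A rough standalone sketch of that decision follows; the helper names are illustrative, and vfio-pci is assumed to be the preferred driver, as in the trace.

#!/usr/bin/env bash
# Sketch of the driver-selection logic traced above: prefer vfio-pci when the
# IOMMU is usable, otherwise fall back to uio_pci_generic if its module exists.

vfio_usable() {
    # Unsafe no-IOMMU mode makes vfio work even without IOMMU groups.
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
       [[ $(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode) == Y ]]; then
        return 0
    fi
    # Otherwise require at least one IOMMU group to have been created.
    local -a groups=(/sys/kernel/iommu_groups/*)
    [[ -e ${groups[0]} ]]
}

module_available() {
    # --show-depends prints the insmod lines without loading anything; a ".ko"
    # in the output means the module and its dependencies exist on disk.
    modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
}

pick_driver() {
    if vfio_usable; then
        echo vfio-pci
    elif module_available uio_pci_generic; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
        return 1
    fi
}

echo "Looking for driver=$(pick_driver)"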
common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:28.552 19:04:05 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:28.552 19:04:05 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:28.552 19:04:05 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n3 00:03:28.552 19:04:05 -- common/autotest_common.sh@1645 -- # local device=nvme1n3 00:03:28.552 19:04:05 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:28.552 19:04:05 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:28.552 19:04:05 -- setup/devices.sh@196 -- # blocks=() 00:03:28.552 19:04:05 -- setup/devices.sh@196 -- # declare -a blocks 00:03:28.552 19:04:05 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:28.552 19:04:05 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:28.552 19:04:05 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:28.552 19:04:05 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.552 19:04:05 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:28.552 19:04:05 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:28.552 19:04:05 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:28.552 19:04:05 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:28.552 19:04:05 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:28.552 19:04:05 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:28.552 19:04:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:28.552 No valid GPT data, bailing 00:03:28.552 19:04:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:28.552 19:04:05 -- scripts/common.sh@393 -- # pt= 00:03:28.552 19:04:05 -- scripts/common.sh@394 -- # return 1 00:03:28.552 19:04:05 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:28.552 19:04:05 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:28.552 19:04:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:28.552 19:04:05 -- setup/common.sh@80 -- # echo 5368709120 00:03:28.552 19:04:05 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:28.552 19:04:05 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.552 19:04:05 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:28.552 19:04:05 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.552 19:04:05 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:28.552 19:04:05 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:28.552 19:04:05 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:28.552 19:04:05 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:28.552 19:04:05 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:28.552 19:04:05 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:28.552 19:04:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:28.552 No valid GPT data, bailing 00:03:28.552 19:04:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:28.552 19:04:05 -- scripts/common.sh@393 -- # pt= 00:03:28.552 19:04:05 -- scripts/common.sh@394 -- # return 1 00:03:28.552 19:04:05 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:28.552 19:04:05 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:28.552 19:04:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:28.552 19:04:05 -- setup/common.sh@80 -- # echo 4294967296 00:03:28.552 19:04:05 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:28.552 19:04:05 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.552 19:04:05 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:28.552 19:04:05 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.552 19:04:05 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:28.552 19:04:05 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:28.552 19:04:05 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:28.552 19:04:05 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:28.552 19:04:05 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:28.552 19:04:05 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:28.552 19:04:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:28.552 No valid GPT data, bailing 00:03:28.552 19:04:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:28.552 19:04:05 -- scripts/common.sh@393 -- # pt= 00:03:28.552 19:04:05 -- scripts/common.sh@394 -- # return 1 00:03:28.552 19:04:05 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:28.552 19:04:05 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:28.552 19:04:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:28.552 19:04:05 -- setup/common.sh@80 -- # echo 4294967296 00:03:28.552 19:04:05 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:28.552 19:04:05 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.552 19:04:05 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:28.552 19:04:05 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.552 19:04:05 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:28.552 19:04:05 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:28.552 19:04:05 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:28.552 19:04:05 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:28.552 19:04:05 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:28.552 19:04:05 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:28.552 19:04:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:28.811 No valid GPT data, bailing 00:03:28.811 19:04:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:28.811 19:04:06 -- scripts/common.sh@393 -- # pt= 00:03:28.811 19:04:06 -- scripts/common.sh@394 -- # return 1 00:03:28.811 19:04:06 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:28.811 19:04:06 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:28.811 19:04:06 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:28.811 19:04:06 -- setup/common.sh@80 -- # echo 4294967296 00:03:28.811 19:04:06 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:28.811 19:04:06 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.811 19:04:06 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:28.811 19:04:06 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:28.811 19:04:06 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:28.811 19:04:06 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:28.811 19:04:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:28.811 19:04:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:28.811 19:04:06 -- common/autotest_common.sh@10 -- # set +x 00:03:28.811 
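At this point setup/devices.sh has filtered the visible NVMe namespaces: controller-path nodes and zoned namespaces are skipped, blkid/spdk-gpt.py must report no partition table ("No valid GPT data, bailing" is the expected outcome), the namespace must be at least min_disk_size (3221225472 bytes), and the backing PCI address is recorded for each survivor. A simplified sketch of that filter follows; it uses only blkid and sysfs, leaves spdk-gpt.py out, and the PCI lookup assumes a single-path NVMe sysfs layout.

#!/usr/bin/env bash
# Sketch of the block-device filter traced above: keep NVMe namespaces with no
# partition table that are large enough to serve as test disks.
min_disk_size=3221225472        # 3 GiB, the threshold visible in the trace

declare -a blocks
declare -A blocks_to_pci

for block in /sys/block/nvme*; do
    [[ -e $block ]] || continue                  # no NVMe devices at all
    dev=${block##*/}
    [[ $dev == *c* ]] && continue                # skip nvme<ctrl>c<path>n<ns> nodes

    # Zoned namespaces are excluded, mirroring the get_zoned_devs pass above.
    if [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]]; then
        continue
    fi

    # An empty PTTYPE from blkid means no partition table ("No valid GPT data").
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) && [[ -n $pt ]] && continue

    # sysfs reports the size in 512-byte sectors.
    size=$(( $(cat "$block/size") * 512 ))
    (( size >= min_disk_size )) || continue

    blocks+=("$dev")
    # On a single-path NVMe layout the PCI address is two device links up.
    blocks_to_pci[$dev]=$(basename "$(readlink -f "$block/device/device")")
done

for dev in "${blocks[@]}"; do
    echo "usable test disk: $dev -> ${blocks_to_pci[$dev]}"
done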
************************************ 00:03:28.811 START TEST nvme_mount 00:03:28.811 ************************************ 00:03:28.811 19:04:06 -- common/autotest_common.sh@1102 -- # nvme_mount 00:03:28.811 19:04:06 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:28.811 19:04:06 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:28.811 19:04:06 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:28.811 19:04:06 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:28.811 19:04:06 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:28.811 19:04:06 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:28.811 19:04:06 -- setup/common.sh@40 -- # local part_no=1 00:03:28.811 19:04:06 -- setup/common.sh@41 -- # local size=1073741824 00:03:28.811 19:04:06 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:28.811 19:04:06 -- setup/common.sh@44 -- # parts=() 00:03:28.811 19:04:06 -- setup/common.sh@44 -- # local parts 00:03:28.811 19:04:06 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:28.811 19:04:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:28.811 19:04:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:28.811 19:04:06 -- setup/common.sh@46 -- # (( part++ )) 00:03:28.811 19:04:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:28.811 19:04:06 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:28.811 19:04:06 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:28.811 19:04:06 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:29.745 Creating new GPT entries in memory. 00:03:29.745 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:29.745 other utilities. 00:03:29.745 19:04:07 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:29.745 19:04:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:29.745 19:04:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:29.745 19:04:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:29.745 19:04:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:30.681 Creating new GPT entries in memory. 00:03:30.681 The operation has completed successfully. 
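The sgdisk output above is the first half of nvme_mount: wipe the namespace, create partition 1 over sectors 2048-264191 (about 128 MiB), wait for the partition uevent, then mkfs.ext4 and mount it under test/setup/nvme_mount with a marker file that the later verify step looks for; dm_mount repeats the same sequence with two partitions joined by dmsetup create. A condensed sketch follows: it is destructive by design, the device and paths are the ones from the log, and udevadm settle stands in for the repository's sync_dev_uevents.sh helper.

#!/usr/bin/env bash
# Condensed sketch of the nvme_mount flow traced above. Destructive by design:
# it wipes the target namespace. Device and paths are the ones from the log.
set -euo pipefail

disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

# 1. Drop any existing partition table, then create partition 1 over sectors
#    2048-264191 (~128 MiB), exactly the range used in the trace.
sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:2048:264191
udevadm settle            # stand-in for the repo's sync_dev_uevents.sh helper

# 2. Format, mount, and leave a marker file for the later verify step.
mkfs.ext4 -qF "$part"
mkdir -p "$mnt"
mount "$part" "$mnt"
touch "$mnt/test_nvme"

# 3. Cleanup mirrors cleanup_nvme: unmount, then wipe the partition and disk.
umount "$mnt"
wipefs --all "$part"
wipefs --all "$disk"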
00:03:30.681 19:04:08 -- setup/common.sh@57 -- # (( part++ )) 00:03:30.681 19:04:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:30.681 19:04:08 -- setup/common.sh@62 -- # wait 53817 00:03:30.940 19:04:08 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:30.940 19:04:08 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:30.940 19:04:08 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:30.940 19:04:08 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:30.940 19:04:08 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:30.940 19:04:08 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:30.940 19:04:08 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:30.940 19:04:08 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:30.940 19:04:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:30.940 19:04:08 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:30.940 19:04:08 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:30.940 19:04:08 -- setup/devices.sh@53 -- # local found=0 00:03:30.940 19:04:08 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:30.940 19:04:08 -- setup/devices.sh@56 -- # : 00:03:30.940 19:04:08 -- setup/devices.sh@59 -- # local pci status 00:03:30.940 19:04:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.940 19:04:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:30.940 19:04:08 -- setup/devices.sh@47 -- # setup output config 00:03:30.940 19:04:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.940 19:04:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:30.940 19:04:08 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:30.940 19:04:08 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:30.940 19:04:08 -- setup/devices.sh@63 -- # found=1 00:03:30.940 19:04:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.940 19:04:08 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:30.940 19:04:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.508 19:04:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:31.508 19:04:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.508 19:04:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:31.508 19:04:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.508 19:04:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:31.508 19:04:08 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:31.508 19:04:08 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.508 19:04:08 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:31.508 19:04:08 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:31.508 19:04:08 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:31.508 19:04:08 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.508 19:04:08 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.508 19:04:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:31.508 19:04:08 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:31.508 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:31.508 19:04:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:31.508 19:04:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:31.767 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:31.767 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:31.767 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:31.767 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:31.767 19:04:09 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:31.767 19:04:09 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:31.767 19:04:09 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.767 19:04:09 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:31.767 19:04:09 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:31.767 19:04:09 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.026 19:04:09 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:32.026 19:04:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:32.026 19:04:09 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:32.026 19:04:09 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.026 19:04:09 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:32.026 19:04:09 -- setup/devices.sh@53 -- # local found=0 00:03:32.026 19:04:09 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:32.026 19:04:09 -- setup/devices.sh@56 -- # : 00:03:32.026 19:04:09 -- setup/devices.sh@59 -- # local pci status 00:03:32.026 19:04:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.026 19:04:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:32.026 19:04:09 -- setup/devices.sh@47 -- # setup output config 00:03:32.026 19:04:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.026 19:04:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:32.026 19:04:09 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:32.026 19:04:09 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:32.026 19:04:09 -- setup/devices.sh@63 -- # found=1 00:03:32.026 19:04:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.026 19:04:09 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:32.026 
19:04:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.595 19:04:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:32.595 19:04:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.595 19:04:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:32.595 19:04:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.595 19:04:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:32.595 19:04:09 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:32.595 19:04:09 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.595 19:04:09 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:32.595 19:04:09 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:32.595 19:04:09 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.595 19:04:09 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:03:32.595 19:04:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:32.595 19:04:09 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:32.595 19:04:09 -- setup/devices.sh@50 -- # local mount_point= 00:03:32.595 19:04:09 -- setup/devices.sh@51 -- # local test_file= 00:03:32.595 19:04:09 -- setup/devices.sh@53 -- # local found=0 00:03:32.595 19:04:09 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:32.595 19:04:09 -- setup/devices.sh@59 -- # local pci status 00:03:32.595 19:04:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.595 19:04:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:32.595 19:04:09 -- setup/devices.sh@47 -- # setup output config 00:03:32.595 19:04:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.595 19:04:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:32.854 19:04:10 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:32.854 19:04:10 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:32.854 19:04:10 -- setup/devices.sh@63 -- # found=1 00:03:32.854 19:04:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.854 19:04:10 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:32.854 19:04:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.113 19:04:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:33.113 19:04:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.372 19:04:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:33.372 19:04:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.372 19:04:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.372 19:04:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:33.372 19:04:10 -- setup/devices.sh@68 -- # return 0 00:03:33.372 19:04:10 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:33.372 19:04:10 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.372 19:04:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.372 19:04:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.372 19:04:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:33.372 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:03:33.372 00:03:33.372 real 0m4.670s 00:03:33.372 user 0m1.056s 00:03:33.372 sys 0m1.304s 00:03:33.372 19:04:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:33.372 ************************************ 00:03:33.372 END TEST nvme_mount 00:03:33.372 ************************************ 00:03:33.372 19:04:10 -- common/autotest_common.sh@10 -- # set +x 00:03:33.372 19:04:10 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:33.372 19:04:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:33.372 19:04:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:33.372 19:04:10 -- common/autotest_common.sh@10 -- # set +x 00:03:33.372 ************************************ 00:03:33.372 START TEST dm_mount 00:03:33.372 ************************************ 00:03:33.372 19:04:10 -- common/autotest_common.sh@1102 -- # dm_mount 00:03:33.372 19:04:10 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:33.372 19:04:10 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:33.372 19:04:10 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:33.372 19:04:10 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:33.372 19:04:10 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:33.372 19:04:10 -- setup/common.sh@40 -- # local part_no=2 00:03:33.372 19:04:10 -- setup/common.sh@41 -- # local size=1073741824 00:03:33.372 19:04:10 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:33.372 19:04:10 -- setup/common.sh@44 -- # parts=() 00:03:33.372 19:04:10 -- setup/common.sh@44 -- # local parts 00:03:33.372 19:04:10 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:33.372 19:04:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.372 19:04:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:33.372 19:04:10 -- setup/common.sh@46 -- # (( part++ )) 00:03:33.372 19:04:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.372 19:04:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:33.372 19:04:10 -- setup/common.sh@46 -- # (( part++ )) 00:03:33.372 19:04:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.372 19:04:10 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:33.372 19:04:10 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:33.372 19:04:10 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:34.750 Creating new GPT entries in memory. 00:03:34.750 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:34.750 other utilities. 00:03:34.750 19:04:11 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:34.750 19:04:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.750 19:04:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:34.750 19:04:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:34.750 19:04:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:35.686 Creating new GPT entries in memory. 00:03:35.686 The operation has completed successfully. 00:03:35.686 19:04:12 -- setup/common.sh@57 -- # (( part++ )) 00:03:35.686 19:04:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.686 19:04:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:35.686 19:04:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.686 19:04:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:36.622 The operation has completed successfully. 00:03:36.622 19:04:13 -- setup/common.sh@57 -- # (( part++ )) 00:03:36.622 19:04:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.622 19:04:13 -- setup/common.sh@62 -- # wait 54303 00:03:36.622 19:04:13 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:36.622 19:04:13 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:36.622 19:04:13 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:36.622 19:04:13 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:36.622 19:04:13 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:36.622 19:04:13 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.622 19:04:13 -- setup/devices.sh@161 -- # break 00:03:36.622 19:04:13 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.622 19:04:13 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:36.622 19:04:13 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:36.622 19:04:13 -- setup/devices.sh@166 -- # dm=dm-0 00:03:36.622 19:04:13 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:36.622 19:04:13 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:36.622 19:04:13 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:36.622 19:04:13 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:36.622 19:04:13 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:36.622 19:04:13 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.622 19:04:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:36.622 19:04:13 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:36.622 19:04:13 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:36.622 19:04:13 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:36.622 19:04:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:36.622 19:04:13 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:36.622 19:04:13 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:36.622 19:04:13 -- setup/devices.sh@53 -- # local found=0 00:03:36.622 19:04:13 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:36.622 19:04:13 -- setup/devices.sh@56 -- # : 00:03:36.622 19:04:13 -- setup/devices.sh@59 -- # local pci status 00:03:36.622 19:04:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.622 19:04:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:36.622 19:04:13 -- setup/devices.sh@47 -- # setup output config 00:03:36.622 19:04:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.622 19:04:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:36.881 19:04:14 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:36.881 19:04:14 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:36.881 19:04:14 -- setup/devices.sh@63 -- # found=1 00:03:36.881 19:04:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.881 19:04:14 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:36.881 19:04:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.139 19:04:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:37.139 19:04:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.398 19:04:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:37.398 19:04:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.398 19:04:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.398 19:04:14 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:37.398 19:04:14 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:37.398 19:04:14 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:37.398 19:04:14 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:37.398 19:04:14 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:37.398 19:04:14 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:37.398 19:04:14 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:37.398 19:04:14 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:37.398 19:04:14 -- setup/devices.sh@50 -- # local mount_point= 00:03:37.398 19:04:14 -- setup/devices.sh@51 -- # local test_file= 00:03:37.398 19:04:14 -- setup/devices.sh@53 -- # local found=0 00:03:37.398 19:04:14 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:37.398 19:04:14 -- setup/devices.sh@59 -- # local pci status 00:03:37.398 19:04:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.398 19:04:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:37.398 19:04:14 -- setup/devices.sh@47 -- # setup output config 00:03:37.398 19:04:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.398 19:04:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:37.657 19:04:14 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:37.657 19:04:14 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:37.657 19:04:14 -- setup/devices.sh@63 -- # found=1 00:03:37.657 19:04:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.657 19:04:14 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:37.657 19:04:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.939 19:04:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:37.939 19:04:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.939 19:04:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:37.939 19:04:15 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.208 19:04:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.208 19:04:15 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:38.208 19:04:15 -- setup/devices.sh@68 -- # return 0 00:03:38.208 19:04:15 -- setup/devices.sh@187 -- # cleanup_dm 00:03:38.208 19:04:15 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.208 19:04:15 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:38.208 19:04:15 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:38.208 19:04:15 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.208 19:04:15 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:38.208 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:38.208 19:04:15 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:38.208 19:04:15 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:38.208 00:03:38.208 real 0m4.651s 00:03:38.208 user 0m0.685s 00:03:38.208 sys 0m0.900s 00:03:38.208 19:04:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:38.208 19:04:15 -- common/autotest_common.sh@10 -- # set +x 00:03:38.208 ************************************ 00:03:38.208 END TEST dm_mount 00:03:38.208 ************************************ 00:03:38.208 19:04:15 -- setup/devices.sh@1 -- # cleanup 00:03:38.208 19:04:15 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:38.208 19:04:15 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:38.208 19:04:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.208 19:04:15 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:38.208 19:04:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.208 19:04:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:38.467 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:38.467 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:38.467 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:38.467 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:38.467 19:04:15 -- setup/devices.sh@12 -- # cleanup_dm 00:03:38.467 19:04:15 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.467 19:04:15 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:38.467 19:04:15 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.467 19:04:15 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:38.467 19:04:15 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.467 19:04:15 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:38.467 00:03:38.467 real 0m10.915s 00:03:38.467 user 0m2.410s 00:03:38.467 sys 0m2.844s 00:03:38.467 19:04:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:38.467 19:04:15 -- common/autotest_common.sh@10 -- # set +x 00:03:38.467 ************************************ 00:03:38.467 END TEST devices 00:03:38.467 ************************************ 00:03:38.467 00:03:38.467 real 0m22.513s 00:03:38.467 user 0m7.414s 00:03:38.467 sys 0m9.588s 00:03:38.467 19:04:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:38.467 19:04:15 -- common/autotest_common.sh@10 -- # set +x 00:03:38.467 ************************************ 00:03:38.467 END TEST setup.sh 00:03:38.467 ************************************ 00:03:38.467 19:04:15 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:38.725 Hugepages 00:03:38.725 node hugesize free / total 00:03:38.725 node0 1048576kB 0 / 0 00:03:38.725 node0 2048kB 2048 / 2048 00:03:38.725 00:03:38.725 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.725 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:38.725 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:38.984 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:38.984 19:04:16 -- spdk/autotest.sh@141 -- # uname -s 00:03:38.984 19:04:16 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:38.984 19:04:16 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:38.984 19:04:16 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:39.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:39.810 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.810 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.810 19:04:17 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:40.746 19:04:18 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:40.746 19:04:18 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:40.746 19:04:18 -- common/autotest_common.sh@1517 -- # bdfs=($(get_nvme_bdfs)) 00:03:40.746 19:04:18 -- common/autotest_common.sh@1517 -- # get_nvme_bdfs 00:03:40.746 19:04:18 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:40.746 19:04:18 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:40.746 19:04:18 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:40.746 19:04:18 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:40.746 19:04:18 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:41.005 19:04:18 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:03:41.005 19:04:18 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:03:41.005 19:04:18 -- common/autotest_common.sh@1519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:41.264 Waiting for block devices as requested 00:03:41.264 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:03:41.264 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:03:41.524 19:04:18 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}" 00:03:41.524 19:04:18 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:03:41.524 19:04:18 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:41.524 19:04:18 -- common/autotest_common.sh@1485 -- # grep 0000:00:06.0/nvme/nvme 00:03:41.524 19:04:18 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:41.524 19:04:18 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:03:41.524 19:04:18 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:41.524 19:04:18 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:41.524 19:04:18 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme0 00:03:41.524 19:04:18 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme0 ]] 00:03:41.524 19:04:18 -- 
common/autotest_common.sh@1528 -- # grep oacs 00:03:41.524 19:04:18 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme0 00:03:41.524 19:04:18 -- common/autotest_common.sh@1528 -- # cut -d: -f2 00:03:41.524 19:04:18 -- common/autotest_common.sh@1528 -- # oacs=' 0x12a' 00:03:41.524 19:04:18 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8 00:03:41.524 19:04:18 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]] 00:03:41.524 19:04:18 -- common/autotest_common.sh@1537 -- # nvme id-ctrl /dev/nvme0 00:03:41.524 19:04:18 -- common/autotest_common.sh@1537 -- # grep unvmcap 00:03:41.524 19:04:18 -- common/autotest_common.sh@1537 -- # cut -d: -f2 00:03:41.524 19:04:18 -- common/autotest_common.sh@1537 -- # unvmcap=' 0' 00:03:41.524 19:04:18 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]] 00:03:41.524 19:04:18 -- common/autotest_common.sh@1540 -- # continue 00:03:41.524 19:04:18 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}" 00:03:41.524 19:04:18 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:03:41.524 19:04:18 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:41.524 19:04:18 -- common/autotest_common.sh@1485 -- # grep 0000:00:07.0/nvme/nvme 00:03:41.524 19:04:18 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:03:41.524 19:04:18 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:03:41.524 19:04:18 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:03:41.524 19:04:18 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:03:41.524 19:04:18 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme1 00:03:41.524 19:04:18 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme1 ]] 00:03:41.524 19:04:18 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme1 00:03:41.524 19:04:18 -- common/autotest_common.sh@1528 -- # grep oacs 00:03:41.524 19:04:18 -- common/autotest_common.sh@1528 -- # cut -d: -f2 00:03:41.524 19:04:18 -- common/autotest_common.sh@1528 -- # oacs=' 0x12a' 00:03:41.524 19:04:18 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8 00:03:41.524 19:04:18 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]] 00:03:41.524 19:04:18 -- common/autotest_common.sh@1537 -- # grep unvmcap 00:03:41.524 19:04:18 -- common/autotest_common.sh@1537 -- # nvme id-ctrl /dev/nvme1 00:03:41.524 19:04:18 -- common/autotest_common.sh@1537 -- # cut -d: -f2 00:03:41.524 19:04:18 -- common/autotest_common.sh@1537 -- # unvmcap=' 0' 00:03:41.524 19:04:18 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]] 00:03:41.524 19:04:18 -- common/autotest_common.sh@1540 -- # continue 00:03:41.524 19:04:18 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:41.524 19:04:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:41.524 19:04:18 -- common/autotest_common.sh@10 -- # set +x 00:03:41.524 19:04:18 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:41.524 19:04:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:41.524 19:04:18 -- common/autotest_common.sh@10 -- # set +x 00:03:41.524 19:04:18 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:42.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.349 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.349 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:03:42.349 19:04:19 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:42.349 19:04:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:42.349 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.608 19:04:19 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:42.608 19:04:19 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:03:42.608 19:04:19 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:03:42.608 19:04:19 -- common/autotest_common.sh@1560 -- # bdfs=() 00:03:42.608 19:04:19 -- common/autotest_common.sh@1560 -- # local bdfs 00:03:42.608 19:04:19 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:42.608 19:04:19 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:42.608 19:04:19 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:42.608 19:04:19 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.608 19:04:19 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:42.608 19:04:19 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:42.608 19:04:19 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:03:42.608 19:04:19 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:03:42.608 19:04:19 -- common/autotest_common.sh@1562 -- # for bdf in $(get_nvme_bdfs) 00:03:42.608 19:04:19 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:03:42.608 19:04:19 -- common/autotest_common.sh@1563 -- # device=0x0010 00:03:42.608 19:04:19 -- common/autotest_common.sh@1564 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:42.608 19:04:19 -- common/autotest_common.sh@1562 -- # for bdf in $(get_nvme_bdfs) 00:03:42.608 19:04:19 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:03:42.608 19:04:19 -- common/autotest_common.sh@1563 -- # device=0x0010 00:03:42.608 19:04:19 -- common/autotest_common.sh@1564 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:42.608 19:04:19 -- common/autotest_common.sh@1569 -- # printf '%s\n' 00:03:42.608 19:04:19 -- common/autotest_common.sh@1575 -- # [[ -z '' ]] 00:03:42.608 19:04:19 -- common/autotest_common.sh@1576 -- # return 0 00:03:42.608 19:04:19 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:03:42.608 19:04:19 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:03:42.608 19:04:19 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:42.608 19:04:19 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:42.608 19:04:19 -- spdk/autotest.sh@173 -- # timing_enter lib 00:03:42.608 19:04:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:42.608 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.608 19:04:19 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:42.608 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.608 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.608 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.608 ************************************ 00:03:42.608 START TEST env 00:03:42.608 ************************************ 00:03:42.608 19:04:19 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:42.608 * Looking for test storage... 
00:03:42.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:42.608 19:04:19 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:42.608 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.608 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.608 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.608 ************************************ 00:03:42.608 START TEST env_memory 00:03:42.608 ************************************ 00:03:42.608 19:04:19 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:42.609 00:03:42.609 00:03:42.609 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.609 http://cunit.sourceforge.net/ 00:03:42.609 00:03:42.609 00:03:42.609 Suite: memory 00:03:42.609 Test: alloc and free memory map ...[2024-02-14 19:04:20.007719] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:42.609 passed 00:03:42.867 Test: mem map translation ...[2024-02-14 19:04:20.038673] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:42.867 [2024-02-14 19:04:20.038728] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:42.867 [2024-02-14 19:04:20.038788] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:42.867 [2024-02-14 19:04:20.038800] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:42.867 passed 00:03:42.867 Test: mem map registration ...[2024-02-14 19:04:20.102827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:42.867 [2024-02-14 19:04:20.102883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:42.867 passed 00:03:42.867 Test: mem map adjacent registrations ...passed 00:03:42.867 00:03:42.867 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.867 suites 1 1 n/a 0 0 00:03:42.867 tests 4 4 4 0 0 00:03:42.868 asserts 152 152 152 0 n/a 00:03:42.868 00:03:42.868 Elapsed time = 0.213 seconds 00:03:42.868 00:03:42.868 real 0m0.229s 00:03:42.868 user 0m0.213s 00:03:42.868 sys 0m0.015s 00:03:42.868 19:04:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.868 19:04:20 -- common/autotest_common.sh@10 -- # set +x 00:03:42.868 ************************************ 00:03:42.868 END TEST env_memory 00:03:42.868 ************************************ 00:03:42.868 19:04:20 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:42.868 19:04:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.868 19:04:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.868 19:04:20 -- common/autotest_common.sh@10 -- # set +x 00:03:42.868 ************************************ 00:03:42.868 START TEST env_vtophys 00:03:42.868 ************************************ 00:03:42.868 19:04:20 -- common/autotest_common.sh@1102 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:42.868 EAL: lib.eal log level changed from notice to debug 00:03:42.868 EAL: Detected lcore 0 as core 0 on socket 0 00:03:42.868 EAL: Detected lcore 1 as core 0 on socket 0 00:03:42.868 EAL: Detected lcore 2 as core 0 on socket 0 00:03:42.868 EAL: Detected lcore 3 as core 0 on socket 0 00:03:42.868 EAL: Detected lcore 4 as core 0 on socket 0 00:03:42.868 EAL: Detected lcore 5 as core 0 on socket 0 00:03:42.868 EAL: Detected lcore 6 as core 0 on socket 0 00:03:42.868 EAL: Detected lcore 7 as core 0 on socket 0 00:03:42.868 EAL: Detected lcore 8 as core 0 on socket 0 00:03:42.868 EAL: Detected lcore 9 as core 0 on socket 0 00:03:42.868 EAL: Maximum logical cores by configuration: 128 00:03:42.868 EAL: Detected CPU lcores: 10 00:03:42.868 EAL: Detected NUMA nodes: 1 00:03:42.868 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:42.868 EAL: Detected shared linkage of DPDK 00:03:42.868 EAL: No shared files mode enabled, IPC will be disabled 00:03:42.868 EAL: Selected IOVA mode 'PA' 00:03:42.868 EAL: Probing VFIO support... 00:03:42.868 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:42.868 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:42.868 EAL: Ask a virtual area of 0x2e000 bytes 00:03:42.868 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:42.868 EAL: Setting up physically contiguous memory... 00:03:42.868 EAL: Setting maximum number of open files to 524288 00:03:42.868 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:42.868 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:42.868 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.868 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:42.868 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.868 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.868 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:42.868 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:42.868 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.868 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:42.868 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.868 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.868 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:42.868 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:42.868 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.868 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:42.868 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.868 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.868 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:42.868 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:42.868 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.868 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:42.868 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.868 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.868 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:42.868 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:42.868 EAL: Hugepages will be freed exactly as allocated. 
00:03:42.868 EAL: No shared files mode enabled, IPC is disabled 00:03:42.868 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: TSC frequency is ~2200000 KHz 00:03:43.130 EAL: Main lcore 0 is ready (tid=7f8ad4b03a00;cpuset=[0]) 00:03:43.130 EAL: Trying to obtain current memory policy. 00:03:43.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.130 EAL: Restoring previous memory policy: 0 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was expanded by 2MB 00:03:43.130 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:43.130 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:43.130 EAL: Mem event callback 'spdk:(nil)' registered 00:03:43.130 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:43.130 00:03:43.130 00:03:43.130 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.130 http://cunit.sourceforge.net/ 00:03:43.130 00:03:43.130 00:03:43.130 Suite: components_suite 00:03:43.130 Test: vtophys_malloc_test ...passed 00:03:43.130 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:43.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.130 EAL: Restoring previous memory policy: 4 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was expanded by 4MB 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was shrunk by 4MB 00:03:43.130 EAL: Trying to obtain current memory policy. 00:03:43.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.130 EAL: Restoring previous memory policy: 4 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was expanded by 6MB 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was shrunk by 6MB 00:03:43.130 EAL: Trying to obtain current memory policy. 00:03:43.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.130 EAL: Restoring previous memory policy: 4 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was expanded by 10MB 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was shrunk by 10MB 00:03:43.130 EAL: Trying to obtain current memory policy. 
00:03:43.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.130 EAL: Restoring previous memory policy: 4 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was expanded by 18MB 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was shrunk by 18MB 00:03:43.130 EAL: Trying to obtain current memory policy. 00:03:43.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.130 EAL: Restoring previous memory policy: 4 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was expanded by 34MB 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was shrunk by 34MB 00:03:43.130 EAL: Trying to obtain current memory policy. 00:03:43.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.130 EAL: Restoring previous memory policy: 4 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was expanded by 66MB 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was shrunk by 66MB 00:03:43.130 EAL: Trying to obtain current memory policy. 00:03:43.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.130 EAL: Restoring previous memory policy: 4 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was expanded by 130MB 00:03:43.130 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.130 EAL: request: mp_malloc_sync 00:03:43.130 EAL: No shared files mode enabled, IPC is disabled 00:03:43.130 EAL: Heap on socket 0 was shrunk by 130MB 00:03:43.130 EAL: Trying to obtain current memory policy. 00:03:43.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.388 EAL: Restoring previous memory policy: 4 00:03:43.388 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.388 EAL: request: mp_malloc_sync 00:03:43.388 EAL: No shared files mode enabled, IPC is disabled 00:03:43.388 EAL: Heap on socket 0 was expanded by 258MB 00:03:43.388 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.388 EAL: request: mp_malloc_sync 00:03:43.388 EAL: No shared files mode enabled, IPC is disabled 00:03:43.388 EAL: Heap on socket 0 was shrunk by 258MB 00:03:43.388 EAL: Trying to obtain current memory policy. 
00:03:43.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.646 EAL: Restoring previous memory policy: 4 00:03:43.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.646 EAL: request: mp_malloc_sync 00:03:43.646 EAL: No shared files mode enabled, IPC is disabled 00:03:43.646 EAL: Heap on socket 0 was expanded by 514MB 00:03:43.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.646 EAL: request: mp_malloc_sync 00:03:43.646 EAL: No shared files mode enabled, IPC is disabled 00:03:43.646 EAL: Heap on socket 0 was shrunk by 514MB 00:03:43.646 EAL: Trying to obtain current memory policy. 00:03:43.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.212 EAL: Restoring previous memory policy: 4 00:03:44.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.212 EAL: request: mp_malloc_sync 00:03:44.212 EAL: No shared files mode enabled, IPC is disabled 00:03:44.212 EAL: Heap on socket 0 was expanded by 1026MB 00:03:44.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.470 passed 00:03:44.470 00:03:44.470 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.470 suites 1 1 n/a 0 0 00:03:44.470 tests 2 2 2 0 0 00:03:44.470 asserts 5358 5358 5358 0 n/a 00:03:44.470 00:03:44.470 Elapsed time = 1.286 seconds 00:03:44.470 EAL: request: mp_malloc_sync 00:03:44.470 EAL: No shared files mode enabled, IPC is disabled 00:03:44.470 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:44.470 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.470 EAL: request: mp_malloc_sync 00:03:44.470 EAL: No shared files mode enabled, IPC is disabled 00:03:44.470 EAL: Heap on socket 0 was shrunk by 2MB 00:03:44.470 EAL: No shared files mode enabled, IPC is disabled 00:03:44.470 EAL: No shared files mode enabled, IPC is disabled 00:03:44.470 EAL: No shared files mode enabled, IPC is disabled 00:03:44.470 00:03:44.470 real 0m1.482s 00:03:44.470 user 0m0.800s 00:03:44.471 sys 0m0.547s 00:03:44.471 19:04:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.471 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:03:44.471 ************************************ 00:03:44.471 END TEST env_vtophys 00:03:44.471 ************************************ 00:03:44.471 19:04:21 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:44.471 19:04:21 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:44.471 19:04:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:44.471 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:03:44.471 ************************************ 00:03:44.471 START TEST env_pci 00:03:44.471 ************************************ 00:03:44.471 19:04:21 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:44.471 00:03:44.471 00:03:44.471 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.471 http://cunit.sourceforge.net/ 00:03:44.471 00:03:44.471 00:03:44.471 Suite: pci 00:03:44.471 Test: pci_hook ...[2024-02-14 19:04:21.795616] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55480 has claimed it 00:03:44.471 passed 00:03:44.471 00:03:44.471 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.471 suites 1 1 n/a 0 0 00:03:44.471 tests 1 1 1 0 0 00:03:44.471 asserts 25 25 25 0 n/a 00:03:44.471 00:03:44.471 Elapsed time = 0.002 seconds 00:03:44.471 EAL: Cannot find device (10000:00:01.0) 00:03:44.471 EAL: Failed to attach device 
on primary process 00:03:44.471 00:03:44.471 real 0m0.018s 00:03:44.471 user 0m0.009s 00:03:44.471 sys 0m0.009s 00:03:44.471 19:04:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.471 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:03:44.471 ************************************ 00:03:44.471 END TEST env_pci 00:03:44.471 ************************************ 00:03:44.471 19:04:21 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:44.471 19:04:21 -- env/env.sh@15 -- # uname 00:03:44.471 19:04:21 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:44.471 19:04:21 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:44.471 19:04:21 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.471 19:04:21 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:03:44.471 19:04:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:44.471 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:03:44.471 ************************************ 00:03:44.471 START TEST env_dpdk_post_init 00:03:44.471 ************************************ 00:03:44.471 19:04:21 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.729 EAL: Detected CPU lcores: 10 00:03:44.729 EAL: Detected NUMA nodes: 1 00:03:44.729 EAL: Detected shared linkage of DPDK 00:03:44.729 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.729 EAL: Selected IOVA mode 'PA' 00:03:44.729 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:44.729 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:03:44.729 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:03:44.729 Starting DPDK initialization... 00:03:44.729 Starting SPDK post initialization... 00:03:44.729 SPDK NVMe probe 00:03:44.729 Attaching to 0000:00:06.0 00:03:44.729 Attaching to 0000:00:07.0 00:03:44.729 Attached to 0000:00:06.0 00:03:44.729 Attached to 0000:00:07.0 00:03:44.729 Cleaning up... 
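The probe lines just above show SPDK's userspace NVMe driver claiming 0000:00:06.0 and 0000:00:07.0 after setup.sh had rebound them from the kernel nvme driver to uio_pci_generic. A small sketch, not part of the harness, for checking which kernel driver currently owns those BDFs via sysfs (the driver symlink under each PCI device node):

    # sketch: inspect current driver binding for the two controllers seen in this log
    for bdf in 0000:00:06.0 0000:00:07.0; do
        if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
            echo "$bdf -> $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
        else
            echo "$bdf -> (unbound)"
        fi
    done
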
00:03:44.729 00:03:44.729 real 0m0.176s 00:03:44.729 user 0m0.038s 00:03:44.729 sys 0m0.039s 00:03:44.729 19:04:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.729 19:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:44.729 ************************************ 00:03:44.729 END TEST env_dpdk_post_init 00:03:44.729 ************************************ 00:03:44.729 19:04:22 -- env/env.sh@26 -- # uname 00:03:44.729 19:04:22 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:44.729 19:04:22 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:44.729 19:04:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:44.729 19:04:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:44.729 19:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:44.729 ************************************ 00:03:44.729 START TEST env_mem_callbacks 00:03:44.729 ************************************ 00:03:44.729 19:04:22 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:44.729 EAL: Detected CPU lcores: 10 00:03:44.729 EAL: Detected NUMA nodes: 1 00:03:44.729 EAL: Detected shared linkage of DPDK 00:03:44.729 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.729 EAL: Selected IOVA mode 'PA' 00:03:44.987 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:44.987 00:03:44.987 00:03:44.987 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.987 http://cunit.sourceforge.net/ 00:03:44.987 00:03:44.987 00:03:44.987 Suite: memory 00:03:44.987 Test: test ... 00:03:44.987 register 0x200000200000 2097152 00:03:44.987 malloc 3145728 00:03:44.987 register 0x200000400000 4194304 00:03:44.987 buf 0x200000500000 len 3145728 PASSED 00:03:44.987 malloc 64 00:03:44.987 buf 0x2000004fff40 len 64 PASSED 00:03:44.987 malloc 4194304 00:03:44.987 register 0x200000800000 6291456 00:03:44.987 buf 0x200000a00000 len 4194304 PASSED 00:03:44.987 free 0x200000500000 3145728 00:03:44.987 free 0x2000004fff40 64 00:03:44.987 unregister 0x200000400000 4194304 PASSED 00:03:44.987 free 0x200000a00000 4194304 00:03:44.987 unregister 0x200000800000 6291456 PASSED 00:03:44.987 malloc 8388608 00:03:44.987 register 0x200000400000 10485760 00:03:44.987 buf 0x200000600000 len 8388608 PASSED 00:03:44.987 free 0x200000600000 8388608 00:03:44.987 unregister 0x200000400000 10485760 PASSED 00:03:44.987 passed 00:03:44.987 00:03:44.987 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.987 suites 1 1 n/a 0 0 00:03:44.987 tests 1 1 1 0 0 00:03:44.987 asserts 15 15 15 0 n/a 00:03:44.987 00:03:44.987 Elapsed time = 0.010 seconds 00:03:44.987 00:03:44.987 real 0m0.145s 00:03:44.987 user 0m0.015s 00:03:44.987 sys 0m0.029s 00:03:44.987 19:04:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.987 19:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:44.987 ************************************ 00:03:44.987 END TEST env_mem_callbacks 00:03:44.987 ************************************ 00:03:44.987 00:03:44.987 real 0m2.406s 00:03:44.987 user 0m1.195s 00:03:44.987 sys 0m0.858s 00:03:44.987 19:04:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.988 19:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:44.988 ************************************ 00:03:44.988 END TEST env 00:03:44.988 ************************************ 00:03:44.988 19:04:22 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:03:44.988 19:04:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:44.988 19:04:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:44.988 19:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:44.988 ************************************ 00:03:44.988 START TEST rpc 00:03:44.988 ************************************ 00:03:44.988 19:04:22 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:45.246 * Looking for test storage... 00:03:45.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:45.246 19:04:22 -- rpc/rpc.sh@65 -- # spdk_pid=55590 00:03:45.246 19:04:22 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.246 19:04:22 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:45.246 19:04:22 -- rpc/rpc.sh@67 -- # waitforlisten 55590 00:03:45.246 19:04:22 -- common/autotest_common.sh@817 -- # '[' -z 55590 ']' 00:03:45.246 19:04:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.246 19:04:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:45.246 19:04:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.246 19:04:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:45.246 19:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:45.246 [2024-02-14 19:04:22.486281] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:03:45.246 [2024-02-14 19:04:22.486406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55590 ] 00:03:45.246 [2024-02-14 19:04:22.625045] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.505 [2024-02-14 19:04:22.750624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:45.505 [2024-02-14 19:04:22.750793] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:45.505 [2024-02-14 19:04:22.750809] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55590' to capture a snapshot of events at runtime. 00:03:45.505 [2024-02-14 19:04:22.750819] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55590 for offline analysis/debug. 
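The rpc_integrity, rpc_plugins, and go_rpc runs that follow drive this freshly started spdk_tgt through the rpc_cmd wrapper. A roughly equivalent manual session, sketched under the assumption that the target is still listening on the default /var/tmp/spdk.sock (the first malloc bdev comes back as Malloc0 here, matching the log below), would be:

    # sketch: exercising the same bdev RPCs by hand against the running target
    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py bdev_malloc_create 8 512            # 8 MiB malloc bdev, 512 B blocks -> 16384 blocks
    ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 1
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
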
00:03:45.505 [2024-02-14 19:04:22.750848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.439 19:04:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:46.439 19:04:23 -- common/autotest_common.sh@850 -- # return 0 00:03:46.439 19:04:23 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.439 19:04:23 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.439 19:04:23 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:46.439 19:04:23 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:46.439 19:04:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:46.439 19:04:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:46.439 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.439 ************************************ 00:03:46.439 START TEST rpc_integrity 00:03:46.439 ************************************ 00:03:46.439 19:04:23 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:03:46.439 19:04:23 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:46.439 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.439 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.439 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.439 19:04:23 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:46.440 19:04:23 -- rpc/rpc.sh@13 -- # jq length 00:03:46.440 19:04:23 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:46.440 19:04:23 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:46.440 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.440 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.440 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.440 19:04:23 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:46.440 19:04:23 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:46.440 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.440 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.440 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.440 19:04:23 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:46.440 { 00:03:46.440 "aliases": [ 00:03:46.440 "ce564d5e-86e6-426f-9c59-d9bb8de965b5" 00:03:46.440 ], 00:03:46.440 "assigned_rate_limits": { 00:03:46.440 "r_mbytes_per_sec": 0, 00:03:46.440 "rw_ios_per_sec": 0, 00:03:46.440 "rw_mbytes_per_sec": 0, 00:03:46.440 "w_mbytes_per_sec": 0 00:03:46.440 }, 00:03:46.440 "block_size": 512, 00:03:46.440 "claimed": false, 00:03:46.440 "driver_specific": {}, 00:03:46.440 "memory_domains": [ 00:03:46.440 { 00:03:46.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.440 "dma_device_type": 2 00:03:46.440 } 00:03:46.440 ], 00:03:46.440 "name": "Malloc0", 00:03:46.440 "num_blocks": 16384, 00:03:46.440 "product_name": "Malloc disk", 00:03:46.440 "supported_io_types": { 00:03:46.440 "abort": true, 00:03:46.440 "compare": false, 00:03:46.440 "compare_and_write": false, 00:03:46.440 "flush": true, 00:03:46.440 "nvme_admin": false, 00:03:46.440 "nvme_io": false, 00:03:46.440 "read": true, 00:03:46.440 "reset": true, 00:03:46.440 "unmap": true, 00:03:46.440 "write": true, 00:03:46.440 "write_zeroes": true 00:03:46.440 }, 
00:03:46.440 "uuid": "ce564d5e-86e6-426f-9c59-d9bb8de965b5", 00:03:46.440 "zoned": false 00:03:46.440 } 00:03:46.440 ]' 00:03:46.440 19:04:23 -- rpc/rpc.sh@17 -- # jq length 00:03:46.440 19:04:23 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:46.440 19:04:23 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:46.440 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.440 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.440 [2024-02-14 19:04:23.702420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:46.440 [2024-02-14 19:04:23.702485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:46.440 [2024-02-14 19:04:23.702524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c0440 00:03:46.440 [2024-02-14 19:04:23.702534] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:46.440 [2024-02-14 19:04:23.704341] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:46.440 [2024-02-14 19:04:23.704381] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:46.440 Passthru0 00:03:46.440 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.440 19:04:23 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:46.440 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.440 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.440 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.440 19:04:23 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:46.440 { 00:03:46.440 "aliases": [ 00:03:46.440 "ce564d5e-86e6-426f-9c59-d9bb8de965b5" 00:03:46.440 ], 00:03:46.440 "assigned_rate_limits": { 00:03:46.440 "r_mbytes_per_sec": 0, 00:03:46.440 "rw_ios_per_sec": 0, 00:03:46.440 "rw_mbytes_per_sec": 0, 00:03:46.440 "w_mbytes_per_sec": 0 00:03:46.440 }, 00:03:46.440 "block_size": 512, 00:03:46.440 "claim_type": "exclusive_write", 00:03:46.440 "claimed": true, 00:03:46.440 "driver_specific": {}, 00:03:46.440 "memory_domains": [ 00:03:46.440 { 00:03:46.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.440 "dma_device_type": 2 00:03:46.440 } 00:03:46.440 ], 00:03:46.440 "name": "Malloc0", 00:03:46.440 "num_blocks": 16384, 00:03:46.440 "product_name": "Malloc disk", 00:03:46.440 "supported_io_types": { 00:03:46.440 "abort": true, 00:03:46.440 "compare": false, 00:03:46.440 "compare_and_write": false, 00:03:46.440 "flush": true, 00:03:46.440 "nvme_admin": false, 00:03:46.440 "nvme_io": false, 00:03:46.440 "read": true, 00:03:46.440 "reset": true, 00:03:46.440 "unmap": true, 00:03:46.440 "write": true, 00:03:46.440 "write_zeroes": true 00:03:46.440 }, 00:03:46.440 "uuid": "ce564d5e-86e6-426f-9c59-d9bb8de965b5", 00:03:46.440 "zoned": false 00:03:46.440 }, 00:03:46.440 { 00:03:46.440 "aliases": [ 00:03:46.440 "91173af0-9e88-56f6-ab93-020467c5d7ac" 00:03:46.440 ], 00:03:46.440 "assigned_rate_limits": { 00:03:46.440 "r_mbytes_per_sec": 0, 00:03:46.440 "rw_ios_per_sec": 0, 00:03:46.440 "rw_mbytes_per_sec": 0, 00:03:46.440 "w_mbytes_per_sec": 0 00:03:46.440 }, 00:03:46.440 "block_size": 512, 00:03:46.440 "claimed": false, 00:03:46.440 "driver_specific": { 00:03:46.440 "passthru": { 00:03:46.440 "base_bdev_name": "Malloc0", 00:03:46.440 "name": "Passthru0" 00:03:46.440 } 00:03:46.440 }, 00:03:46.440 "memory_domains": [ 00:03:46.440 { 00:03:46.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.440 "dma_device_type": 2 00:03:46.440 } 00:03:46.440 ], 
00:03:46.440 "name": "Passthru0", 00:03:46.440 "num_blocks": 16384, 00:03:46.440 "product_name": "passthru", 00:03:46.440 "supported_io_types": { 00:03:46.440 "abort": true, 00:03:46.440 "compare": false, 00:03:46.440 "compare_and_write": false, 00:03:46.440 "flush": true, 00:03:46.440 "nvme_admin": false, 00:03:46.440 "nvme_io": false, 00:03:46.440 "read": true, 00:03:46.440 "reset": true, 00:03:46.440 "unmap": true, 00:03:46.440 "write": true, 00:03:46.440 "write_zeroes": true 00:03:46.440 }, 00:03:46.440 "uuid": "91173af0-9e88-56f6-ab93-020467c5d7ac", 00:03:46.440 "zoned": false 00:03:46.440 } 00:03:46.440 ]' 00:03:46.440 19:04:23 -- rpc/rpc.sh@21 -- # jq length 00:03:46.440 19:04:23 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:46.440 19:04:23 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:46.440 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.440 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.440 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.440 19:04:23 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:46.440 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.440 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.440 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.440 19:04:23 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:46.440 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.440 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.440 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.440 19:04:23 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:46.440 19:04:23 -- rpc/rpc.sh@26 -- # jq length 00:03:46.698 ************************************ 00:03:46.698 END TEST rpc_integrity 00:03:46.698 ************************************ 00:03:46.698 19:04:23 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:46.698 00:03:46.698 real 0m0.315s 00:03:46.698 user 0m0.207s 00:03:46.698 sys 0m0.030s 00:03:46.698 19:04:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:46.698 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.698 19:04:23 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:46.698 19:04:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:46.698 19:04:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:46.698 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.698 ************************************ 00:03:46.698 START TEST rpc_plugins 00:03:46.698 ************************************ 00:03:46.698 19:04:23 -- common/autotest_common.sh@1102 -- # rpc_plugins 00:03:46.698 19:04:23 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:46.698 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.698 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.698 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.698 19:04:23 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:46.698 19:04:23 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:46.698 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.698 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.698 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.698 19:04:23 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:46.698 { 00:03:46.698 "aliases": [ 00:03:46.698 "da4ac5ca-d98b-4ab3-9218-5772879ddac7" 00:03:46.698 ], 00:03:46.698 "assigned_rate_limits": { 00:03:46.698 "r_mbytes_per_sec": 0, 00:03:46.698 
"rw_ios_per_sec": 0, 00:03:46.698 "rw_mbytes_per_sec": 0, 00:03:46.698 "w_mbytes_per_sec": 0 00:03:46.698 }, 00:03:46.698 "block_size": 4096, 00:03:46.698 "claimed": false, 00:03:46.698 "driver_specific": {}, 00:03:46.698 "memory_domains": [ 00:03:46.698 { 00:03:46.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.698 "dma_device_type": 2 00:03:46.698 } 00:03:46.698 ], 00:03:46.698 "name": "Malloc1", 00:03:46.698 "num_blocks": 256, 00:03:46.698 "product_name": "Malloc disk", 00:03:46.698 "supported_io_types": { 00:03:46.698 "abort": true, 00:03:46.698 "compare": false, 00:03:46.698 "compare_and_write": false, 00:03:46.698 "flush": true, 00:03:46.698 "nvme_admin": false, 00:03:46.698 "nvme_io": false, 00:03:46.698 "read": true, 00:03:46.698 "reset": true, 00:03:46.698 "unmap": true, 00:03:46.698 "write": true, 00:03:46.698 "write_zeroes": true 00:03:46.698 }, 00:03:46.698 "uuid": "da4ac5ca-d98b-4ab3-9218-5772879ddac7", 00:03:46.698 "zoned": false 00:03:46.698 } 00:03:46.698 ]' 00:03:46.698 19:04:23 -- rpc/rpc.sh@32 -- # jq length 00:03:46.698 19:04:23 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:46.698 19:04:23 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:46.698 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.698 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.698 19:04:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.698 19:04:24 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:46.698 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.698 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.698 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.698 19:04:24 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:46.698 19:04:24 -- rpc/rpc.sh@36 -- # jq length 00:03:46.698 ************************************ 00:03:46.699 END TEST rpc_plugins 00:03:46.699 ************************************ 00:03:46.699 19:04:24 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:46.699 00:03:46.699 real 0m0.160s 00:03:46.699 user 0m0.107s 00:03:46.699 sys 0m0.016s 00:03:46.699 19:04:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:46.699 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.699 19:04:24 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:46.699 19:04:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:46.699 19:04:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:46.699 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.957 ************************************ 00:03:46.957 START TEST rpc_trace_cmd_test 00:03:46.957 ************************************ 00:03:46.957 19:04:24 -- common/autotest_common.sh@1102 -- # rpc_trace_cmd_test 00:03:46.957 19:04:24 -- rpc/rpc.sh@40 -- # local info 00:03:46.957 19:04:24 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:46.957 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.957 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.957 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.957 19:04:24 -- rpc/rpc.sh@42 -- # info='{ 00:03:46.957 "bdev": { 00:03:46.957 "mask": "0x8", 00:03:46.957 "tpoint_mask": "0xffffffffffffffff" 00:03:46.957 }, 00:03:46.957 "bdev_nvme": { 00:03:46.957 "mask": "0x4000", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "blobfs": { 00:03:46.957 "mask": "0x80", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "dsa": { 00:03:46.957 "mask": "0x200", 00:03:46.957 
"tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "ftl": { 00:03:46.957 "mask": "0x40", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "iaa": { 00:03:46.957 "mask": "0x1000", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "iscsi_conn": { 00:03:46.957 "mask": "0x2", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "nvme_pcie": { 00:03:46.957 "mask": "0x800", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "nvme_tcp": { 00:03:46.957 "mask": "0x2000", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "nvmf_rdma": { 00:03:46.957 "mask": "0x10", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "nvmf_tcp": { 00:03:46.957 "mask": "0x20", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "scsi": { 00:03:46.957 "mask": "0x4", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "thread": { 00:03:46.957 "mask": "0x400", 00:03:46.957 "tpoint_mask": "0x0" 00:03:46.957 }, 00:03:46.957 "tpoint_group_mask": "0x8", 00:03:46.957 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55590" 00:03:46.957 }' 00:03:46.957 19:04:24 -- rpc/rpc.sh@43 -- # jq length 00:03:46.957 19:04:24 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:46.957 19:04:24 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:46.957 19:04:24 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:46.957 19:04:24 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:46.957 19:04:24 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:46.957 19:04:24 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:46.957 19:04:24 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:46.957 19:04:24 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:47.215 ************************************ 00:03:47.215 END TEST rpc_trace_cmd_test 00:03:47.215 ************************************ 00:03:47.215 19:04:24 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:47.215 00:03:47.215 real 0m0.263s 00:03:47.215 user 0m0.218s 00:03:47.215 sys 0m0.035s 00:03:47.215 19:04:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.215 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.215 19:04:24 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:03:47.215 19:04:24 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:03:47.215 19:04:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:47.215 19:04:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:47.215 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.215 ************************************ 00:03:47.215 START TEST go_rpc 00:03:47.215 ************************************ 00:03:47.215 19:04:24 -- common/autotest_common.sh@1102 -- # go_rpc 00:03:47.216 19:04:24 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:03:47.216 19:04:24 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:03:47.216 19:04:24 -- rpc/rpc.sh@52 -- # jq length 00:03:47.216 19:04:24 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:03:47.216 19:04:24 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.216 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.216 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.216 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.216 19:04:24 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:03:47.216 19:04:24 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:03:47.216 19:04:24 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["0af9f6b9-2ffe-4e64-8492-413b3c25221b"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"0af9f6b9-2ffe-4e64-8492-413b3c25221b","zoned":false}]' 00:03:47.216 19:04:24 -- rpc/rpc.sh@57 -- # jq length 00:03:47.216 19:04:24 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:03:47.216 19:04:24 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:47.216 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.216 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.216 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.216 19:04:24 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:03:47.216 19:04:24 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:03:47.216 19:04:24 -- rpc/rpc.sh@61 -- # jq length 00:03:47.474 ************************************ 00:03:47.474 END TEST go_rpc 00:03:47.474 ************************************ 00:03:47.474 19:04:24 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:03:47.474 00:03:47.474 real 0m0.237s 00:03:47.474 user 0m0.157s 00:03:47.474 sys 0m0.042s 00:03:47.474 19:04:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.474 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.474 19:04:24 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:47.474 19:04:24 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:47.474 19:04:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:47.474 19:04:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:47.474 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.474 ************************************ 00:03:47.474 START TEST rpc_daemon_integrity 00:03:47.474 ************************************ 00:03:47.474 19:04:24 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:03:47.474 19:04:24 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.474 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.474 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.474 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.474 19:04:24 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.474 19:04:24 -- rpc/rpc.sh@13 -- # jq length 00:03:47.474 19:04:24 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.474 19:04:24 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.474 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.474 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.474 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.474 19:04:24 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:03:47.474 19:04:24 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.474 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.474 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.474 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.474 19:04:24 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.474 { 00:03:47.474 "aliases": [ 00:03:47.474 "3a65a283-00ad-4611-ac21-d4a51c0a4e71" 00:03:47.474 ], 00:03:47.474 "assigned_rate_limits": { 00:03:47.474 
"r_mbytes_per_sec": 0, 00:03:47.474 "rw_ios_per_sec": 0, 00:03:47.474 "rw_mbytes_per_sec": 0, 00:03:47.474 "w_mbytes_per_sec": 0 00:03:47.474 }, 00:03:47.474 "block_size": 512, 00:03:47.474 "claimed": false, 00:03:47.474 "driver_specific": {}, 00:03:47.474 "memory_domains": [ 00:03:47.474 { 00:03:47.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.474 "dma_device_type": 2 00:03:47.474 } 00:03:47.474 ], 00:03:47.474 "name": "Malloc3", 00:03:47.474 "num_blocks": 16384, 00:03:47.474 "product_name": "Malloc disk", 00:03:47.474 "supported_io_types": { 00:03:47.474 "abort": true, 00:03:47.474 "compare": false, 00:03:47.474 "compare_and_write": false, 00:03:47.474 "flush": true, 00:03:47.474 "nvme_admin": false, 00:03:47.474 "nvme_io": false, 00:03:47.474 "read": true, 00:03:47.474 "reset": true, 00:03:47.474 "unmap": true, 00:03:47.474 "write": true, 00:03:47.474 "write_zeroes": true 00:03:47.474 }, 00:03:47.474 "uuid": "3a65a283-00ad-4611-ac21-d4a51c0a4e71", 00:03:47.474 "zoned": false 00:03:47.474 } 00:03:47.474 ]' 00:03:47.474 19:04:24 -- rpc/rpc.sh@17 -- # jq length 00:03:47.474 19:04:24 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.474 19:04:24 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:03:47.474 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.474 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.474 [2024-02-14 19:04:24.876841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:47.474 [2024-02-14 19:04:24.876910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.474 [2024-02-14 19:04:24.876931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b66990 00:03:47.474 [2024-02-14 19:04:24.876941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.474 [2024-02-14 19:04:24.878937] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.474 [2024-02-14 19:04:24.878977] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.474 Passthru0 00:03:47.474 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.474 19:04:24 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.474 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.474 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.733 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.733 19:04:24 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.733 { 00:03:47.733 "aliases": [ 00:03:47.733 "3a65a283-00ad-4611-ac21-d4a51c0a4e71" 00:03:47.733 ], 00:03:47.733 "assigned_rate_limits": { 00:03:47.733 "r_mbytes_per_sec": 0, 00:03:47.733 "rw_ios_per_sec": 0, 00:03:47.733 "rw_mbytes_per_sec": 0, 00:03:47.733 "w_mbytes_per_sec": 0 00:03:47.733 }, 00:03:47.733 "block_size": 512, 00:03:47.733 "claim_type": "exclusive_write", 00:03:47.733 "claimed": true, 00:03:47.733 "driver_specific": {}, 00:03:47.733 "memory_domains": [ 00:03:47.733 { 00:03:47.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.733 "dma_device_type": 2 00:03:47.733 } 00:03:47.733 ], 00:03:47.733 "name": "Malloc3", 00:03:47.733 "num_blocks": 16384, 00:03:47.733 "product_name": "Malloc disk", 00:03:47.733 "supported_io_types": { 00:03:47.733 "abort": true, 00:03:47.733 "compare": false, 00:03:47.733 "compare_and_write": false, 00:03:47.733 "flush": true, 00:03:47.733 "nvme_admin": false, 00:03:47.733 "nvme_io": false, 00:03:47.733 "read": true, 00:03:47.733 "reset": true, 
00:03:47.733 "unmap": true, 00:03:47.733 "write": true, 00:03:47.733 "write_zeroes": true 00:03:47.733 }, 00:03:47.733 "uuid": "3a65a283-00ad-4611-ac21-d4a51c0a4e71", 00:03:47.733 "zoned": false 00:03:47.733 }, 00:03:47.733 { 00:03:47.733 "aliases": [ 00:03:47.733 "700a3a43-a62d-5942-b74c-9631e5fa2c11" 00:03:47.733 ], 00:03:47.733 "assigned_rate_limits": { 00:03:47.733 "r_mbytes_per_sec": 0, 00:03:47.733 "rw_ios_per_sec": 0, 00:03:47.733 "rw_mbytes_per_sec": 0, 00:03:47.733 "w_mbytes_per_sec": 0 00:03:47.733 }, 00:03:47.733 "block_size": 512, 00:03:47.733 "claimed": false, 00:03:47.733 "driver_specific": { 00:03:47.733 "passthru": { 00:03:47.733 "base_bdev_name": "Malloc3", 00:03:47.733 "name": "Passthru0" 00:03:47.733 } 00:03:47.733 }, 00:03:47.733 "memory_domains": [ 00:03:47.733 { 00:03:47.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.733 "dma_device_type": 2 00:03:47.733 } 00:03:47.733 ], 00:03:47.733 "name": "Passthru0", 00:03:47.733 "num_blocks": 16384, 00:03:47.733 "product_name": "passthru", 00:03:47.733 "supported_io_types": { 00:03:47.733 "abort": true, 00:03:47.733 "compare": false, 00:03:47.733 "compare_and_write": false, 00:03:47.733 "flush": true, 00:03:47.733 "nvme_admin": false, 00:03:47.733 "nvme_io": false, 00:03:47.733 "read": true, 00:03:47.733 "reset": true, 00:03:47.733 "unmap": true, 00:03:47.733 "write": true, 00:03:47.733 "write_zeroes": true 00:03:47.733 }, 00:03:47.733 "uuid": "700a3a43-a62d-5942-b74c-9631e5fa2c11", 00:03:47.733 "zoned": false 00:03:47.733 } 00:03:47.733 ]' 00:03:47.733 19:04:24 -- rpc/rpc.sh@21 -- # jq length 00:03:47.733 19:04:24 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.733 19:04:24 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.733 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.733 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.733 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.733 19:04:24 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:03:47.733 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.733 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.733 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.733 19:04:24 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.733 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.733 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.733 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.733 19:04:24 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.733 19:04:24 -- rpc/rpc.sh@26 -- # jq length 00:03:47.733 19:04:25 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.733 00:03:47.733 real 0m0.320s 00:03:47.733 user 0m0.200s 00:03:47.733 sys 0m0.046s 00:03:47.733 19:04:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.733 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:47.733 ************************************ 00:03:47.734 END TEST rpc_daemon_integrity 00:03:47.734 ************************************ 00:03:47.734 19:04:25 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:47.734 19:04:25 -- rpc/rpc.sh@84 -- # killprocess 55590 00:03:47.734 19:04:25 -- common/autotest_common.sh@924 -- # '[' -z 55590 ']' 00:03:47.734 19:04:25 -- common/autotest_common.sh@928 -- # kill -0 55590 00:03:47.734 19:04:25 -- common/autotest_common.sh@929 -- # uname 00:03:47.734 19:04:25 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:03:47.734 19:04:25 -- common/autotest_common.sh@930 -- 
# ps --no-headers -o comm= 55590 00:03:47.734 19:04:25 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:03:47.734 killing process with pid 55590 00:03:47.734 19:04:25 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:03:47.734 19:04:25 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 55590' 00:03:47.734 19:04:25 -- common/autotest_common.sh@943 -- # kill 55590 00:03:47.734 19:04:25 -- common/autotest_common.sh@948 -- # wait 55590 00:03:48.357 00:03:48.357 real 0m3.290s 00:03:48.357 user 0m4.260s 00:03:48.357 sys 0m0.831s 00:03:48.357 19:04:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.357 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.357 ************************************ 00:03:48.357 END TEST rpc 00:03:48.357 ************************************ 00:03:48.357 19:04:25 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:48.357 19:04:25 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:48.357 19:04:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:48.357 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.357 ************************************ 00:03:48.357 START TEST rpc_client 00:03:48.357 ************************************ 00:03:48.357 19:04:25 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:48.357 * Looking for test storage... 00:03:48.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:03:48.616 19:04:25 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:03:48.616 OK 00:03:48.616 19:04:25 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:48.616 00:03:48.616 real 0m0.107s 00:03:48.616 user 0m0.052s 00:03:48.616 sys 0m0.063s 00:03:48.616 19:04:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.616 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.616 ************************************ 00:03:48.616 END TEST rpc_client 00:03:48.616 ************************************ 00:03:48.616 19:04:25 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:48.616 19:04:25 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:48.616 19:04:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:48.616 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.616 ************************************ 00:03:48.616 START TEST json_config 00:03:48.616 ************************************ 00:03:48.616 19:04:25 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:48.616 19:04:25 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:48.616 19:04:25 -- nvmf/common.sh@7 -- # uname -s 00:03:48.616 19:04:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.616 19:04:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.616 19:04:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.616 19:04:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.616 19:04:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.616 19:04:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.616 19:04:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.616 19:04:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.616 19:04:25 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.616 19:04:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.616 19:04:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:03:48.616 19:04:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:03:48.616 19:04:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.616 19:04:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.617 19:04:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:48.617 19:04:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:48.617 19:04:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.617 19:04:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.617 19:04:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.617 19:04:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.617 19:04:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.617 19:04:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.617 19:04:25 -- paths/export.sh@5 -- # export PATH 00:03:48.617 19:04:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.617 19:04:25 -- nvmf/common.sh@46 -- # : 0 00:03:48.617 19:04:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:48.617 19:04:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:48.617 19:04:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:48.617 19:04:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.617 19:04:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.617 19:04:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:48.617 19:04:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:48.617 19:04:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:48.617 19:04:25 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:03:48.617 19:04:25 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 
]] 00:03:48.617 19:04:25 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:03:48.617 19:04:25 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:48.617 19:04:25 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:03:48.617 19:04:25 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:03:48.617 19:04:25 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:48.617 19:04:25 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:03:48.617 19:04:25 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:48.617 19:04:25 -- json_config/json_config.sh@32 -- # declare -A app_params 00:03:48.617 19:04:25 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:03:48.617 19:04:25 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:03:48.617 19:04:25 -- json_config/json_config.sh@43 -- # last_event_id=0 00:03:48.617 19:04:25 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:48.617 INFO: JSON configuration test init 00:03:48.617 19:04:25 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:03:48.617 19:04:25 -- json_config/json_config.sh@420 -- # json_config_test_init 00:03:48.617 19:04:25 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:03:48.617 19:04:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:48.617 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.617 19:04:25 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:03:48.617 19:04:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:48.617 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.617 19:04:25 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:03:48.617 19:04:25 -- json_config/json_config.sh@98 -- # local app=target 00:03:48.617 19:04:25 -- json_config/json_config.sh@99 -- # shift 00:03:48.617 19:04:25 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:48.617 19:04:25 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:48.617 19:04:25 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:48.617 19:04:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:48.617 19:04:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:48.617 19:04:25 -- json_config/json_config.sh@111 -- # app_pid[$app]=55895 00:03:48.617 19:04:25 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:48.617 Waiting for target to run... 
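[Editor's note] The json_config test above launches spdk_tgt paused with --wait-for-rpc and then blocks until the RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming the stock scripts/rpc.py client and the socket path used in this log; the retry count and sleep interval here are illustrative, not the values hard-coded in waitforlisten:

# Start the target paused, listening on a private RPC socket (sketch).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!

# Poll the RPC socket until the target responds; rpc_get_methods is a
# cheap request that succeeds as soon as the server is accepting RPCs.
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done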
00:03:48.617 19:04:25 -- json_config/json_config.sh@114 -- # waitforlisten 55895 /var/tmp/spdk_tgt.sock 00:03:48.617 19:04:25 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:48.617 19:04:25 -- common/autotest_common.sh@817 -- # '[' -z 55895 ']' 00:03:48.617 19:04:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:48.617 19:04:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:48.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:48.617 19:04:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:48.617 19:04:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:48.617 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.617 [2024-02-14 19:04:25.988916] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:03:48.617 [2024-02-14 19:04:25.989016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55895 ] 00:03:49.185 [2024-02-14 19:04:26.456001] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.185 [2024-02-14 19:04:26.561879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:49.185 [2024-02-14 19:04:26.562081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.751 19:04:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:49.751 19:04:26 -- common/autotest_common.sh@850 -- # return 0 00:03:49.751 00:03:49.751 19:04:26 -- json_config/json_config.sh@115 -- # echo '' 00:03:49.751 19:04:26 -- json_config/json_config.sh@322 -- # create_accel_config 00:03:49.751 19:04:26 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:03:49.751 19:04:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:49.751 19:04:26 -- common/autotest_common.sh@10 -- # set +x 00:03:49.751 19:04:26 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:03:49.751 19:04:26 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:03:49.751 19:04:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:49.751 19:04:26 -- common/autotest_common.sh@10 -- # set +x 00:03:49.751 19:04:26 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:49.751 19:04:26 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:03:49.751 19:04:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:50.010 19:04:27 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:03:50.010 19:04:27 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:03:50.010 19:04:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:50.010 19:04:27 -- common/autotest_common.sh@10 -- # set +x 00:03:50.268 19:04:27 -- json_config/json_config.sh@48 -- # local ret=0 00:03:50.268 19:04:27 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:50.268 19:04:27 -- json_config/json_config.sh@49 -- # local enabled_types 00:03:50.268 19:04:27 -- json_config/json_config.sh@51 -- # 
jq -r '.[]' 00:03:50.268 19:04:27 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:50.268 19:04:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:50.528 19:04:27 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:50.528 19:04:27 -- json_config/json_config.sh@51 -- # local get_types 00:03:50.528 19:04:27 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:50.528 19:04:27 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:03:50.528 19:04:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:50.528 19:04:27 -- common/autotest_common.sh@10 -- # set +x 00:03:50.528 19:04:27 -- json_config/json_config.sh@58 -- # return 0 00:03:50.528 19:04:27 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:03:50.528 19:04:27 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:03:50.528 19:04:27 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:03:50.528 19:04:27 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:03:50.528 19:04:27 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:03:50.528 19:04:27 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:03:50.528 19:04:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:50.528 19:04:27 -- common/autotest_common.sh@10 -- # set +x 00:03:50.528 19:04:27 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:50.528 19:04:27 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:03:50.528 19:04:27 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:03:50.528 19:04:27 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:50.528 19:04:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:50.787 MallocForNvmf0 00:03:50.787 19:04:28 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:50.787 19:04:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.046 MallocForNvmf1 00:03:51.046 19:04:28 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:51.046 19:04:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:51.305 [2024-02-14 19:04:28.513269] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:51.305 19:04:28 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:51.305 19:04:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:51.564 19:04:28 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:51.564 19:04:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:51.823 19:04:29 -- json_config/json_config.sh@301 -- # 
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:51.823 19:04:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.082 19:04:29 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.082 19:04:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.082 [2024-02-14 19:04:29.497808] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:52.340 19:04:29 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:03:52.340 19:04:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:52.340 19:04:29 -- common/autotest_common.sh@10 -- # set +x 00:03:52.340 19:04:29 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:03:52.340 19:04:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:52.340 19:04:29 -- common/autotest_common.sh@10 -- # set +x 00:03:52.340 19:04:29 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:03:52.340 19:04:29 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:52.340 19:04:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:52.599 MallocBdevForConfigChangeCheck 00:03:52.599 19:04:29 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:03:52.599 19:04:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:52.599 19:04:29 -- common/autotest_common.sh@10 -- # set +x 00:03:52.599 19:04:29 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:03:52.599 19:04:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.167 INFO: shutting down applications... 00:03:53.167 19:04:30 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
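[Editor's note] The configuration saved just above was built entirely over the RPC socket. Condensed from the tgt_rpc calls in this log, the equivalent standalone sequence looks roughly like this; the $rpc shorthand is introduced only for the sketch:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Backing malloc bdevs for the two namespaces.
$rpc bdev_malloc_create 8 512  --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

# TCP transport, one subsystem, two namespaces, one listener.
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

# Extra bdev used later to detect configuration changes.
$rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck

# Persist the resulting configuration to the reference file.
$rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json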
00:03:53.167 19:04:30 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:03:53.167 19:04:30 -- json_config/json_config.sh@431 -- # json_config_clear target 00:03:53.167 19:04:30 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:03:53.167 19:04:30 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:53.426 Calling clear_iscsi_subsystem 00:03:53.426 Calling clear_nvmf_subsystem 00:03:53.426 Calling clear_nbd_subsystem 00:03:53.426 Calling clear_ublk_subsystem 00:03:53.426 Calling clear_vhost_blk_subsystem 00:03:53.426 Calling clear_vhost_scsi_subsystem 00:03:53.426 Calling clear_scheduler_subsystem 00:03:53.426 Calling clear_bdev_subsystem 00:03:53.426 Calling clear_accel_subsystem 00:03:53.426 Calling clear_vmd_subsystem 00:03:53.426 Calling clear_sock_subsystem 00:03:53.426 Calling clear_iobuf_subsystem 00:03:53.426 19:04:30 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:03:53.426 19:04:30 -- json_config/json_config.sh@396 -- # count=100 00:03:53.426 19:04:30 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:03:53.426 19:04:30 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.426 19:04:30 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:53.426 19:04:30 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:03:53.686 19:04:31 -- json_config/json_config.sh@398 -- # break 00:03:53.686 19:04:31 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:03:53.686 19:04:31 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:03:53.686 19:04:31 -- json_config/json_config.sh@120 -- # local app=target 00:03:53.686 19:04:31 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:03:53.686 19:04:31 -- json_config/json_config.sh@124 -- # [[ -n 55895 ]] 00:03:53.686 19:04:31 -- json_config/json_config.sh@127 -- # kill -SIGINT 55895 00:03:53.686 19:04:31 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:03:53.686 19:04:31 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:53.686 19:04:31 -- json_config/json_config.sh@130 -- # kill -0 55895 00:03:53.686 19:04:31 -- json_config/json_config.sh@134 -- # sleep 0.5 00:03:54.254 19:04:31 -- json_config/json_config.sh@129 -- # (( i++ )) 00:03:54.254 19:04:31 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:54.254 19:04:31 -- json_config/json_config.sh@130 -- # kill -0 55895 00:03:54.254 19:04:31 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:03:54.254 19:04:31 -- json_config/json_config.sh@132 -- # break 00:03:54.254 19:04:31 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:03:54.254 19:04:31 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:03:54.254 SPDK target shutdown done 00:03:54.254 INFO: relaunching applications... 00:03:54.254 19:04:31 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
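[Editor's note] The shutdown above is a SIGINT followed by a bounded poll on the target PID rather than an unconditional kill. A sketch of the same pattern, with the 30 iterations of 0.5 s taken from the loop in the log:

# Ask the target to shut down cleanly, then wait up to ~15 seconds.
kill -SIGINT "$tgt_pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$tgt_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done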
00:03:54.254 19:04:31 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:54.254 19:04:31 -- json_config/json_config.sh@98 -- # local app=target 00:03:54.254 19:04:31 -- json_config/json_config.sh@99 -- # shift 00:03:54.254 19:04:31 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:54.254 19:04:31 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:54.254 19:04:31 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:54.254 19:04:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:54.254 19:04:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:54.254 19:04:31 -- json_config/json_config.sh@111 -- # app_pid[$app]=56171 00:03:54.254 19:04:31 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:54.254 Waiting for target to run... 00:03:54.254 19:04:31 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:54.254 19:04:31 -- json_config/json_config.sh@114 -- # waitforlisten 56171 /var/tmp/spdk_tgt.sock 00:03:54.254 19:04:31 -- common/autotest_common.sh@817 -- # '[' -z 56171 ']' 00:03:54.254 19:04:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:54.254 19:04:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:54.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:54.254 19:04:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:54.254 19:04:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:54.254 19:04:31 -- common/autotest_common.sh@10 -- # set +x 00:03:54.254 [2024-02-14 19:04:31.648659] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:03:54.254 [2024-02-14 19:04:31.648782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56171 ] 00:03:54.823 [2024-02-14 19:04:32.197289] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.081 [2024-02-14 19:04:32.312041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:55.081 [2024-02-14 19:04:32.312237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.081 [2024-02-14 19:04:32.312278] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:03:55.340 [2024-02-14 19:04:32.625864] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.340 [2024-02-14 19:04:32.658025] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:56.275 19:04:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:56.275 00:03:56.275 19:04:33 -- common/autotest_common.sh@850 -- # return 0 00:03:56.275 19:04:33 -- json_config/json_config.sh@115 -- # echo '' 00:03:56.275 INFO: Checking if target configuration is the same... 
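[Editor's note] On relaunch the target is started from the JSON file saved a moment earlier instead of --wait-for-rpc, so the whole bdev and NVMe-oF configuration is replayed at startup without issuing the RPCs again. A minimal sketch:

# Restart the target and replay the saved configuration at startup.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &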
00:03:56.275 19:04:33 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:03:56.275 19:04:33 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:56.275 19:04:33 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.275 19:04:33 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:03:56.275 19:04:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.275 + '[' 2 -ne 2 ']' 00:03:56.275 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:56.275 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:03:56.275 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:56.275 +++ basename /dev/fd/62 00:03:56.275 ++ mktemp /tmp/62.XXX 00:03:56.275 + tmp_file_1=/tmp/62.hBb 00:03:56.275 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.275 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.275 + tmp_file_2=/tmp/spdk_tgt_config.json.otU 00:03:56.275 + ret=0 00:03:56.275 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.533 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.533 + diff -u /tmp/62.hBb /tmp/spdk_tgt_config.json.otU 00:03:56.533 INFO: JSON config files are the same 00:03:56.533 + echo 'INFO: JSON config files are the same' 00:03:56.533 + rm /tmp/62.hBb /tmp/spdk_tgt_config.json.otU 00:03:56.533 + exit 0 00:03:56.533 19:04:33 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:03:56.533 INFO: changing configuration and checking if this can be detected... 00:03:56.533 19:04:33 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:56.534 19:04:33 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.534 19:04:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:57.100 19:04:34 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:57.100 19:04:34 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:03:57.100 19:04:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.100 + '[' 2 -ne 2 ']' 00:03:57.100 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:57.100 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:03:57.100 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:57.100 +++ basename /dev/fd/62 00:03:57.100 ++ mktemp /tmp/62.XXX 00:03:57.100 + tmp_file_1=/tmp/62.A3Y 00:03:57.100 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:57.100 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:57.100 + tmp_file_2=/tmp/spdk_tgt_config.json.vT0 00:03:57.100 + ret=0 00:03:57.100 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:57.359 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:57.359 + diff -u /tmp/62.A3Y /tmp/spdk_tgt_config.json.vT0 00:03:57.359 + ret=1 00:03:57.359 + echo '=== Start of file: /tmp/62.A3Y ===' 00:03:57.359 + cat /tmp/62.A3Y 00:03:57.359 + echo '=== End of file: /tmp/62.A3Y ===' 00:03:57.359 + echo '' 00:03:57.359 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vT0 ===' 00:03:57.359 + cat /tmp/spdk_tgt_config.json.vT0 00:03:57.359 + echo '=== End of file: /tmp/spdk_tgt_config.json.vT0 ===' 00:03:57.359 + echo '' 00:03:57.359 + rm /tmp/62.A3Y /tmp/spdk_tgt_config.json.vT0 00:03:57.359 + exit 1 00:03:57.359 INFO: configuration change detected. 00:03:57.359 19:04:34 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:03:57.359 19:04:34 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:03:57.359 19:04:34 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:03:57.359 19:04:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:57.359 19:04:34 -- common/autotest_common.sh@10 -- # set +x 00:03:57.359 19:04:34 -- json_config/json_config.sh@360 -- # local ret=0 00:03:57.359 19:04:34 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:03:57.359 19:04:34 -- json_config/json_config.sh@370 -- # [[ -n 56171 ]] 00:03:57.359 19:04:34 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:03:57.359 19:04:34 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:03:57.359 19:04:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:57.359 19:04:34 -- common/autotest_common.sh@10 -- # set +x 00:03:57.359 19:04:34 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:03:57.359 19:04:34 -- json_config/json_config.sh@246 -- # uname -s 00:03:57.359 19:04:34 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:03:57.359 19:04:34 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:03:57.359 19:04:34 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:03:57.359 19:04:34 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:03:57.359 19:04:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:57.359 19:04:34 -- common/autotest_common.sh@10 -- # set +x 00:03:57.618 19:04:34 -- json_config/json_config.sh@376 -- # killprocess 56171 00:03:57.618 19:04:34 -- common/autotest_common.sh@924 -- # '[' -z 56171 ']' 00:03:57.618 19:04:34 -- common/autotest_common.sh@928 -- # kill -0 56171 00:03:57.618 19:04:34 -- common/autotest_common.sh@929 -- # uname 00:03:57.618 19:04:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:03:57.618 19:04:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 56171 00:03:57.618 19:04:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:03:57.618 killing process with pid 56171 00:03:57.618 19:04:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:03:57.618 19:04:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 56171' 00:03:57.618 
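[Editor's note] The comparison that produces the 'configuration change detected' message above is a plain textual diff of two normalized JSON dumps: the live configuration is saved again, both files are passed through config_filter.py -method sort to make key ordering deterministic, and diff -u decides the result. A condensed sketch of json_diff.sh's core; the temp-file names and the stdin/stdout plumbing of the filter are assumptions for the sketch:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py

# Dump the live config and normalize both sides before diffing.
$rpc save_config > /tmp/live_config.json
$filter -method sort < /tmp/live_config.json > /tmp/live_sorted.json
$filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/ref_sorted.json

if diff -u /tmp/ref_sorted.json /tmp/live_sorted.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi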
19:04:34 -- common/autotest_common.sh@943 -- # kill 56171 00:03:57.618 [2024-02-14 19:04:34.814886] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:03:57.618 19:04:34 -- common/autotest_common.sh@948 -- # wait 56171 00:03:57.877 19:04:35 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:57.877 19:04:35 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:03:57.877 19:04:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:57.877 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:03:57.877 19:04:35 -- json_config/json_config.sh@381 -- # return 0 00:03:57.877 INFO: Success 00:03:57.877 19:04:35 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:03:57.877 00:03:57.877 real 0m9.330s 00:03:57.877 user 0m13.181s 00:03:57.878 sys 0m2.115s 00:03:57.878 19:04:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:57.878 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:03:57.878 ************************************ 00:03:57.878 END TEST json_config 00:03:57.878 ************************************ 00:03:57.878 19:04:35 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:57.878 19:04:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:57.878 19:04:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:57.878 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:03:57.878 ************************************ 00:03:57.878 START TEST json_config_extra_key 00:03:57.878 ************************************ 00:03:57.878 19:04:35 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:57.878 19:04:35 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:57.878 19:04:35 -- nvmf/common.sh@7 -- # uname -s 00:03:57.878 19:04:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.878 19:04:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.878 19:04:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.878 19:04:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.878 19:04:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.878 19:04:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.878 19:04:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.878 19:04:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.878 19:04:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.878 19:04:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.878 19:04:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:03:57.878 19:04:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:03:57.878 19:04:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.878 19:04:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.878 19:04:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:57.878 19:04:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:57.878 19:04:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.878 19:04:35 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:58.137 19:04:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:58.137 19:04:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.137 19:04:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.137 19:04:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.137 19:04:35 -- paths/export.sh@5 -- # export PATH 00:03:58.137 19:04:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.138 19:04:35 -- nvmf/common.sh@46 -- # : 0 00:03:58.138 19:04:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:58.138 19:04:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:58.138 19:04:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:58.138 19:04:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:58.138 19:04:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:58.138 19:04:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:58.138 19:04:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:58.138 19:04:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@74 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:58.138 INFO: launching applications... 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@25 -- # shift 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56365 00:03:58.138 Waiting for target to run... 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56365 /var/tmp/spdk_tgt.sock 00:03:58.138 19:04:35 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:58.138 19:04:35 -- common/autotest_common.sh@817 -- # '[' -z 56365 ']' 00:03:58.138 19:04:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.138 19:04:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:58.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.138 19:04:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.138 19:04:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:58.138 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:03:58.138 [2024-02-14 19:04:35.377069] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:03:58.138 [2024-02-14 19:04:35.377208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56365 ] 00:03:58.705 [2024-02-14 19:04:35.884525] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.705 [2024-02-14 19:04:35.986991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:58.705 [2024-02-14 19:04:35.987212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.705 [2024-02-14 19:04:35.987266] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:03:59.272 19:04:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:59.272 00:03:59.272 19:04:36 -- common/autotest_common.sh@850 -- # return 0 00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:03:59.272 INFO: shutting down applications... 00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56365 ]] 00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56365 00:03:59.272 [2024-02-14 19:04:36.469383] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56365 00:03:59.272 19:04:36 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:03:59.841 19:04:36 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:03:59.841 19:04:36 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:59.841 19:04:36 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56365 00:03:59.841 19:04:36 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:00.100 19:04:37 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:00.100 19:04:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:00.100 19:04:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56365 00:04:00.100 19:04:37 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:00.100 19:04:37 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:00.100 19:04:37 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:00.100 SPDK target shutdown done 00:04:00.100 19:04:37 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:00.100 Success 00:04:00.100 19:04:37 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:00.100 00:04:00.100 real 0m2.263s 00:04:00.100 user 0m1.739s 00:04:00.100 sys 0m0.610s 00:04:00.100 19:04:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:00.100 ************************************ 00:04:00.100 19:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:00.100 END TEST json_config_extra_key 00:04:00.100 ************************************ 00:04:00.359 19:04:37 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.359 19:04:37 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:00.359 19:04:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:00.359 19:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:00.359 ************************************ 00:04:00.359 START TEST alias_rpc 00:04:00.359 ************************************ 00:04:00.359 19:04:37 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.359 * Looking for test storage... 
00:04:00.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:00.359 19:04:37 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:00.359 19:04:37 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56447 00:04:00.359 19:04:37 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56447 00:04:00.359 19:04:37 -- common/autotest_common.sh@817 -- # '[' -z 56447 ']' 00:04:00.359 19:04:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.359 19:04:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:00.359 19:04:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.359 19:04:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:00.359 19:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:00.359 19:04:37 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:00.359 [2024-02-14 19:04:37.717726] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:00.359 [2024-02-14 19:04:37.717891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56447 ] 00:04:00.618 [2024-02-14 19:04:37.858509] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.618 [2024-02-14 19:04:37.983963] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:00.618 [2024-02-14 19:04:37.984135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.553 19:04:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:01.554 19:04:38 -- common/autotest_common.sh@850 -- # return 0 00:04:01.554 19:04:38 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:01.813 19:04:38 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56447 00:04:01.813 19:04:38 -- common/autotest_common.sh@924 -- # '[' -z 56447 ']' 00:04:01.813 19:04:38 -- common/autotest_common.sh@928 -- # kill -0 56447 00:04:01.813 19:04:38 -- common/autotest_common.sh@929 -- # uname 00:04:01.813 19:04:38 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:01.813 19:04:38 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 56447 00:04:01.813 19:04:39 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:01.813 killing process with pid 56447 00:04:01.813 19:04:39 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:01.813 19:04:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 56447' 00:04:01.813 19:04:39 -- common/autotest_common.sh@943 -- # kill 56447 00:04:01.813 19:04:39 -- common/autotest_common.sh@948 -- # wait 56447 00:04:02.382 00:04:02.382 real 0m1.999s 00:04:02.382 user 0m2.255s 00:04:02.382 sys 0m0.470s 00:04:02.382 19:04:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.382 ************************************ 00:04:02.382 END TEST alias_rpc 00:04:02.382 ************************************ 00:04:02.382 19:04:39 -- common/autotest_common.sh@10 -- # set +x 00:04:02.382 19:04:39 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:04:02.382 19:04:39 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility 
/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.382 19:04:39 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:02.382 19:04:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:02.382 19:04:39 -- common/autotest_common.sh@10 -- # set +x 00:04:02.382 ************************************ 00:04:02.382 START TEST dpdk_mem_utility 00:04:02.382 ************************************ 00:04:02.382 19:04:39 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.382 * Looking for test storage... 00:04:02.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:02.382 19:04:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:02.382 19:04:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56538 00:04:02.382 19:04:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:02.382 19:04:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56538 00:04:02.382 19:04:39 -- common/autotest_common.sh@817 -- # '[' -z 56538 ']' 00:04:02.382 19:04:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.382 19:04:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:02.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.382 19:04:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.382 19:04:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:02.382 19:04:39 -- common/autotest_common.sh@10 -- # set +x 00:04:02.382 [2024-02-14 19:04:39.758464] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
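Both the alias_rpc and dpdk_mem_utility suites bring the target up the same way before issuing RPCs; a sketch under the assumption that the common/autotest_common.sh helpers (waitforlisten, killprocess) are sourced:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # waitforlisten blocks until the target accepts RPCs on /var/tmp/spdk.sock
    waitforlisten "$spdk_tgt_pid"
    # ... RPC-driven test body ...
    # killprocess roughly checks the PID still belongs to the target, then kills it and waits
    killprocess "$spdk_tgt_pid"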
00:04:02.382 [2024-02-14 19:04:39.758590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56538 ] 00:04:02.641 [2024-02-14 19:04:39.890429] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.641 [2024-02-14 19:04:40.019527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:02.641 [2024-02-14 19:04:40.019723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.580 19:04:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:03.580 19:04:40 -- common/autotest_common.sh@850 -- # return 0 00:04:03.580 19:04:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.580 19:04:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.580 19:04:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:03.580 19:04:40 -- common/autotest_common.sh@10 -- # set +x 00:04:03.580 { 00:04:03.580 "filename": "/tmp/spdk_mem_dump.txt" 00:04:03.580 } 00:04:03.580 19:04:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:03.580 19:04:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:03.580 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:03.580 1 heaps totaling size 814.000000 MiB 00:04:03.580 size: 814.000000 MiB heap id: 0 00:04:03.580 end heaps---------- 00:04:03.580 8 mempools totaling size 598.116089 MiB 00:04:03.580 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:03.580 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:03.580 size: 84.521057 MiB name: bdev_io_56538 00:04:03.580 size: 51.011292 MiB name: evtpool_56538 00:04:03.580 size: 50.003479 MiB name: msgpool_56538 00:04:03.580 size: 21.763794 MiB name: PDU_Pool 00:04:03.580 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:03.580 size: 0.026123 MiB name: Session_Pool 00:04:03.580 end mempools------- 00:04:03.580 6 memzones totaling size 4.142822 MiB 00:04:03.580 size: 1.000366 MiB name: RG_ring_0_56538 00:04:03.580 size: 1.000366 MiB name: RG_ring_1_56538 00:04:03.580 size: 1.000366 MiB name: RG_ring_4_56538 00:04:03.580 size: 1.000366 MiB name: RG_ring_5_56538 00:04:03.580 size: 0.125366 MiB name: RG_ring_2_56538 00:04:03.580 size: 0.015991 MiB name: RG_ring_3_56538 00:04:03.580 end memzones------- 00:04:03.580 19:04:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:03.580 heap id: 0 total size: 814.000000 MiB number of busy elements: 226 number of free elements: 15 00:04:03.580 list of free elements. 
size: 12.485474 MiB 00:04:03.580 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:03.580 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:03.580 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:03.580 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:03.580 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:03.580 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:03.580 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:03.580 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:03.580 element at address: 0x200000200000 with size: 0.837219 MiB 00:04:03.580 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:04:03.580 element at address: 0x20000b200000 with size: 0.489258 MiB 00:04:03.580 element at address: 0x200000800000 with size: 0.486877 MiB 00:04:03.580 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:03.580 element at address: 0x200027e00000 with size: 0.397949 MiB 00:04:03.580 element at address: 0x200003a00000 with size: 0.351501 MiB 00:04:03.580 list of standard malloc elements. size: 199.251953 MiB 00:04:03.580 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:03.580 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:03.580 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:03.580 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:03.580 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:03.580 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:03.580 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:03.580 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:03.580 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:03.580 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:03.580 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:04:03.581 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:03.581 element at 
address: 0x20000b27d700 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94300 
with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:03.581 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:03.581 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6dd40 with size: 0.000183 MiB 
00:04:03.582 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:03.582 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:03.582 list of memzone associated elements. 
size: 602.262573 MiB 00:04:03.582 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:03.582 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:03.582 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:03.582 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:03.582 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:03.582 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56538_0 00:04:03.582 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:03.582 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56538_0 00:04:03.582 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:03.582 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56538_0 00:04:03.582 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:03.582 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:03.582 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:03.582 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:03.582 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:03.582 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56538 00:04:03.582 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:03.582 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56538 00:04:03.582 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:03.582 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56538 00:04:03.582 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:03.582 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:03.582 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:03.582 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:03.582 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:03.582 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:03.582 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:03.582 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:03.582 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:03.582 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56538 00:04:03.582 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:03.582 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56538 00:04:03.582 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:03.582 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56538 00:04:03.582 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:03.582 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56538 00:04:03.582 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:03.582 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56538 00:04:03.582 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:03.582 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:03.582 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:03.582 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:03.582 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:03.582 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:03.582 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:03.582 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56538 00:04:03.582 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:03.582 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:03.582 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:04:03.582 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:03.582 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:03.582 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56538 00:04:03.582 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:04:03.582 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:03.582 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:03.582 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56538 00:04:03.582 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:03.582 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56538 00:04:03.582 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:04:03.582 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:03.582 19:04:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:03.582 19:04:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56538 00:04:03.582 19:04:40 -- common/autotest_common.sh@924 -- # '[' -z 56538 ']' 00:04:03.582 19:04:40 -- common/autotest_common.sh@928 -- # kill -0 56538 00:04:03.582 19:04:40 -- common/autotest_common.sh@929 -- # uname 00:04:03.582 19:04:40 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:03.582 19:04:40 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 56538 00:04:03.842 19:04:40 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:03.842 killing process with pid 56538 00:04:03.842 19:04:40 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:03.842 19:04:40 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 56538' 00:04:03.842 19:04:40 -- common/autotest_common.sh@943 -- # kill 56538 00:04:03.842 19:04:40 -- common/autotest_common.sh@948 -- # wait 56538 00:04:04.100 ************************************ 00:04:04.100 END TEST dpdk_mem_utility 00:04:04.100 ************************************ 00:04:04.100 00:04:04.100 real 0m1.896s 00:04:04.100 user 0m2.052s 00:04:04.100 sys 0m0.512s 00:04:04.100 19:04:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.100 19:04:41 -- common/autotest_common.sh@10 -- # set +x 00:04:04.359 19:04:41 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:04.359 19:04:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:04.359 19:04:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:04.359 19:04:41 -- common/autotest_common.sh@10 -- # set +x 00:04:04.359 ************************************ 00:04:04.359 START TEST event 00:04:04.359 ************************************ 00:04:04.359 19:04:41 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:04.359 * Looking for test storage... 
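The heap/mempool/memzone listing above is produced by a two-step flow; a sketch using only the commands visible in the trace (the dump path is whatever env_dpdk_get_mem_stats reports, /tmp/spdk_mem_dump.txt here):

    # ask the running target to write its DPDK memory statistics to a file
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # summarize heaps, mempools and memzones from that dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # the -m 0 invocation in the trace then prints the per-element detail for heap 0
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0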
00:04:04.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:04.359 19:04:41 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:04.359 19:04:41 -- bdev/nbd_common.sh@6 -- # set -e 00:04:04.359 19:04:41 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:04.359 19:04:41 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:04:04.359 19:04:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:04.359 19:04:41 -- common/autotest_common.sh@10 -- # set +x 00:04:04.359 ************************************ 00:04:04.359 START TEST event_perf 00:04:04.359 ************************************ 00:04:04.359 19:04:41 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:04.359 Running I/O for 1 seconds...[2024-02-14 19:04:41.671460] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:04.359 [2024-02-14 19:04:41.671593] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56631 ] 00:04:04.618 [2024-02-14 19:04:41.811258] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:04.618 [2024-02-14 19:04:41.935250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.618 [2024-02-14 19:04:41.935389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:04.618 [2024-02-14 19:04:41.935544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.618 Running I/O for 1 seconds...[2024-02-14 19:04:41.935544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:05.994 00:04:05.994 lcore 0: 138774 00:04:05.994 lcore 1: 138771 00:04:05.994 lcore 2: 138774 00:04:05.994 lcore 3: 138776 00:04:05.994 done. 00:04:05.994 00:04:05.994 real 0m1.423s 00:04:05.994 user 0m4.224s 00:04:05.994 sys 0m0.063s 00:04:05.994 19:04:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.994 19:04:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.994 ************************************ 00:04:05.994 END TEST event_perf 00:04:05.994 ************************************ 00:04:05.994 19:04:43 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:05.994 19:04:43 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:04:05.994 19:04:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:05.994 19:04:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.994 ************************************ 00:04:05.994 START TEST event_reactor 00:04:05.994 ************************************ 00:04:05.994 19:04:43 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:05.994 [2024-02-14 19:04:43.154472] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
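For reference, the per-lcore counts in the event_perf run above come from a one-second measurement across four cores; the invocation, with flags taken from the trace:

    # -m 0xF: reactor core mask (cores 0-3); -t 1: run time in seconds
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # each "lcore N: <count>" line is the number of events that reactor processed during the run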
00:04:05.994 [2024-02-14 19:04:43.155214] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56664 ] 00:04:05.994 [2024-02-14 19:04:43.293082] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.252 [2024-02-14 19:04:43.419725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.187 test_start 00:04:07.187 oneshot 00:04:07.187 tick 100 00:04:07.187 tick 100 00:04:07.187 tick 250 00:04:07.187 tick 100 00:04:07.187 tick 100 00:04:07.187 tick 250 00:04:07.187 tick 500 00:04:07.187 tick 100 00:04:07.187 tick 100 00:04:07.187 tick 100 00:04:07.187 tick 250 00:04:07.187 tick 100 00:04:07.187 tick 100 00:04:07.187 test_end 00:04:07.187 00:04:07.187 real 0m1.424s 00:04:07.187 user 0m1.257s 00:04:07.187 sys 0m0.057s 00:04:07.187 19:04:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:07.187 19:04:44 -- common/autotest_common.sh@10 -- # set +x 00:04:07.187 ************************************ 00:04:07.187 END TEST event_reactor 00:04:07.187 ************************************ 00:04:07.446 19:04:44 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:07.446 19:04:44 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:04:07.446 19:04:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:07.446 19:04:44 -- common/autotest_common.sh@10 -- # set +x 00:04:07.446 ************************************ 00:04:07.446 START TEST event_reactor_perf 00:04:07.446 ************************************ 00:04:07.446 19:04:44 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:07.446 [2024-02-14 19:04:44.640454] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:07.446 [2024-02-14 19:04:44.640612] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56702 ] 00:04:07.446 [2024-02-14 19:04:44.780663] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.705 [2024-02-14 19:04:44.919543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.638 test_start 00:04:08.638 test_end 00:04:08.638 Performance: 369769 events per second 00:04:08.638 00:04:08.638 real 0m1.426s 00:04:08.638 user 0m1.251s 00:04:08.638 sys 0m0.065s 00:04:08.638 19:04:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.638 19:04:46 -- common/autotest_common.sh@10 -- # set +x 00:04:08.638 ************************************ 00:04:08.638 END TEST event_reactor_perf 00:04:08.638 ************************************ 00:04:08.903 19:04:46 -- event/event.sh@49 -- # uname -s 00:04:08.903 19:04:46 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:08.903 19:04:46 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:08.903 19:04:46 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:08.903 19:04:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:08.903 19:04:46 -- common/autotest_common.sh@10 -- # set +x 00:04:08.903 ************************************ 00:04:08.903 START TEST event_scheduler 00:04:08.903 ************************************ 00:04:08.903 19:04:46 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:08.903 * Looking for test storage... 00:04:08.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:08.903 19:04:46 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:08.903 19:04:46 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56760 00:04:08.903 19:04:46 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.903 19:04:46 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:08.903 19:04:46 -- scheduler/scheduler.sh@37 -- # waitforlisten 56760 00:04:08.903 19:04:46 -- common/autotest_common.sh@817 -- # '[' -z 56760 ']' 00:04:08.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.903 19:04:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.903 19:04:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:08.903 19:04:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.903 19:04:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:08.903 19:04:46 -- common/autotest_common.sh@10 -- # set +x 00:04:08.903 [2024-02-14 19:04:46.257364] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:08.903 [2024-02-14 19:04:46.257475] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56760 ] 00:04:09.162 [2024-02-14 19:04:46.399460] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:09.162 [2024-02-14 19:04:46.550008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.162 [2024-02-14 19:04:46.550338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.162 [2024-02-14 19:04:46.551222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:09.162 [2024-02-14 19:04:46.551229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.097 19:04:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:10.097 19:04:47 -- common/autotest_common.sh@850 -- # return 0 00:04:10.097 19:04:47 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:10.097 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.097 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.097 POWER: Env isn't set yet! 00:04:10.097 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:10.097 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:10.097 POWER: Cannot set governor of lcore 0 to userspace 00:04:10.097 POWER: Attempting to initialise PSTAT power management... 00:04:10.097 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:10.097 POWER: Cannot set governor of lcore 0 to performance 00:04:10.097 POWER: Attempting to initialise AMD PSTATE power management... 00:04:10.097 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:10.097 POWER: Cannot set governor of lcore 0 to userspace 00:04:10.097 POWER: Attempting to initialise CPPC power management... 00:04:10.097 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:10.097 POWER: Cannot set governor of lcore 0 to userspace 00:04:10.097 POWER: Attempting to initialise VM power management... 00:04:10.097 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:10.097 POWER: Unable to set Power Management Environment for lcore 0 00:04:10.097 [2024-02-14 19:04:47.257136] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:10.097 [2024-02-14 19:04:47.257162] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:10.097 [2024-02-14 19:04:47.257172] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:10.097 19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.097 19:04:47 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:10.097 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.097 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.097 [2024-02-14 19:04:47.377847] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
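The scheduler bring-up above follows the usual --wait-for-rpc sequence; a condensed sketch, assuming rpc_cmd is the autotest_common.sh RPC wrapper used in the trace:

    # start the test app paused (-m 0xF: 4 cores, -p 0x2: main lcore 2, per the EAL line above)
    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    waitforlisten "$scheduler_pid"
    # the POWER/cpufreq errors above only mean no frequency governor; the dynamic scheduler still loads
    rpc_cmd framework_set_scheduler dynamic
    # finish framework init; the app's test_start callback then runs ("Scheduler test application started.")
    rpc_cmd framework_start_init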
00:04:10.097 19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.097 19:04:47 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:10.097 19:04:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:10.097 19:04:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:10.097 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.098 ************************************ 00:04:10.098 START TEST scheduler_create_thread 00:04:10.098 ************************************ 00:04:10.098 19:04:47 -- common/autotest_common.sh@1102 -- # scheduler_create_thread 00:04:10.098 19:04:47 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:10.098 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.098 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.098 2 00:04:10.098 19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.098 19:04:47 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:10.098 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.098 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.098 3 00:04:10.098 19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.098 19:04:47 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:10.098 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.098 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.098 4 00:04:10.098 19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.098 19:04:47 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:10.098 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.098 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.098 5 00:04:10.098 19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.098 19:04:47 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:10.098 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.098 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.098 6 00:04:10.098 19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.098 19:04:47 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:10.098 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.098 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.098 7 00:04:10.098 19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.098 19:04:47 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:10.098 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.098 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.098 8 00:04:10.098 19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.098 19:04:47 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:10.098 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.098 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.098 9 00:04:10.098 
19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.098 19:04:47 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:10.098 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.098 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.098 10 00:04:10.098 19:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.098 19:04:47 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:10.098 19:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.098 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:11.473 19:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:11.473 19:04:48 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:11.473 19:04:48 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:11.473 19:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:11.473 19:04:48 -- common/autotest_common.sh@10 -- # set +x 00:04:12.406 19:04:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.406 19:04:49 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:12.406 19:04:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.406 19:04:49 -- common/autotest_common.sh@10 -- # set +x 00:04:13.342 19:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.342 19:04:50 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:13.342 19:04:50 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:13.342 19:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.342 19:04:50 -- common/autotest_common.sh@10 -- # set +x 00:04:13.910 19:04:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.910 ************************************ 00:04:13.910 END TEST scheduler_create_thread 00:04:13.910 ************************************ 00:04:13.910 00:04:13.910 real 0m3.888s 00:04:13.910 user 0m0.023s 00:04:13.910 sys 0m0.007s 00:04:13.910 19:04:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:13.910 19:04:51 -- common/autotest_common.sh@10 -- # set +x 00:04:14.169 19:04:51 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:14.169 19:04:51 -- scheduler/scheduler.sh@46 -- # killprocess 56760 00:04:14.169 19:04:51 -- common/autotest_common.sh@924 -- # '[' -z 56760 ']' 00:04:14.169 19:04:51 -- common/autotest_common.sh@928 -- # kill -0 56760 00:04:14.169 19:04:51 -- common/autotest_common.sh@929 -- # uname 00:04:14.169 19:04:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:14.169 19:04:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 56760 00:04:14.169 19:04:51 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:04:14.169 killing process with pid 56760 00:04:14.169 19:04:51 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:04:14.169 19:04:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 56760' 00:04:14.169 19:04:51 -- common/autotest_common.sh@943 -- # kill 56760 00:04:14.169 19:04:51 -- common/autotest_common.sh@948 -- # wait 56760 00:04:14.428 [2024-02-14 19:04:51.658035] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
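The thread churn exercised above goes through an RPC plugin rather than built-in methods; a condensed sketch of the calls seen in the trace (-n names the thread, -m pins it to a core mask, and -a appears to be its target busy percentage, judging from the active_pinned/idle_pinned names):

    # a fully busy thread pinned to core 0, and an idle one on the same core
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # an unpinned thread whose load is raised after creation
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # and one created only to be deleted again
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"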
00:04:14.687 00:04:14.687 real 0m5.986s 00:04:14.687 user 0m12.505s 00:04:14.687 sys 0m0.483s 00:04:14.687 19:04:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:14.687 19:04:52 -- common/autotest_common.sh@10 -- # set +x 00:04:14.687 ************************************ 00:04:14.687 END TEST event_scheduler 00:04:14.687 ************************************ 00:04:14.947 19:04:52 -- event/event.sh@51 -- # modprobe -n nbd 00:04:14.947 19:04:52 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:14.947 19:04:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:14.947 19:04:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:14.947 19:04:52 -- common/autotest_common.sh@10 -- # set +x 00:04:14.947 ************************************ 00:04:14.947 START TEST app_repeat 00:04:14.947 ************************************ 00:04:14.947 19:04:52 -- common/autotest_common.sh@1102 -- # app_repeat_test 00:04:14.947 19:04:52 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.947 19:04:52 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.947 19:04:52 -- event/event.sh@13 -- # local nbd_list 00:04:14.947 19:04:52 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.947 19:04:52 -- event/event.sh@14 -- # local bdev_list 00:04:14.947 19:04:52 -- event/event.sh@15 -- # local repeat_times=4 00:04:14.947 19:04:52 -- event/event.sh@17 -- # modprobe nbd 00:04:14.947 19:04:52 -- event/event.sh@19 -- # repeat_pid=56894 00:04:14.947 19:04:52 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.947 19:04:52 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:14.947 19:04:52 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 56894' 00:04:14.947 Process app_repeat pid: 56894 00:04:14.947 19:04:52 -- event/event.sh@23 -- # for i in {0..2} 00:04:14.947 spdk_app_start Round 0 00:04:14.947 19:04:52 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:14.947 19:04:52 -- event/event.sh@25 -- # waitforlisten 56894 /var/tmp/spdk-nbd.sock 00:04:14.947 19:04:52 -- common/autotest_common.sh@817 -- # '[' -z 56894 ']' 00:04:14.947 19:04:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:14.947 19:04:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:14.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:14.947 19:04:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:14.947 19:04:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:14.947 19:04:52 -- common/autotest_common.sh@10 -- # set +x 00:04:14.947 [2024-02-14 19:04:52.200285] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
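The app_repeat harness above starts the target against its own RPC socket so the nbd helpers that follow have a dedicated endpoint; the invocation as traced:

    # -r: RPC socket used by the nbd helper RPCs; -m 0x3: cores 0-1; -t 4: repeat_times from event.sh
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4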
00:04:14.947 [2024-02-14 19:04:52.201185] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56894 ] 00:04:14.947 [2024-02-14 19:04:52.339927] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.206 [2024-02-14 19:04:52.473507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.206 [2024-02-14 19:04:52.473538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.143 19:04:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:16.143 19:04:53 -- common/autotest_common.sh@850 -- # return 0 00:04:16.143 19:04:53 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.402 Malloc0 00:04:16.402 19:04:53 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.661 Malloc1 00:04:16.661 19:04:53 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@12 -- # local i 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.661 19:04:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:16.920 /dev/nbd0 00:04:16.920 19:04:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:16.920 19:04:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:16.920 19:04:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:16.920 19:04:54 -- common/autotest_common.sh@855 -- # local i 00:04:16.920 19:04:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:16.920 19:04:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:16.920 19:04:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:16.920 19:04:54 -- common/autotest_common.sh@859 -- # break 00:04:16.920 19:04:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:16.920 19:04:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:16.920 19:04:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:16.920 1+0 records in 00:04:16.920 1+0 records out 00:04:16.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419458 s, 9.8 MB/s 00:04:16.920 19:04:54 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:16.920 19:04:54 -- common/autotest_common.sh@872 -- # size=4096 00:04:16.920 19:04:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:16.920 19:04:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:16.920 19:04:54 -- common/autotest_common.sh@875 -- # return 0 00:04:16.920 19:04:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.920 19:04:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.920 19:04:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:17.179 /dev/nbd1 00:04:17.179 19:04:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:17.179 19:04:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:17.179 19:04:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:17.179 19:04:54 -- common/autotest_common.sh@855 -- # local i 00:04:17.179 19:04:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:17.179 19:04:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:17.179 19:04:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:17.179 19:04:54 -- common/autotest_common.sh@859 -- # break 00:04:17.179 19:04:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:17.179 19:04:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:17.179 19:04:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:17.179 1+0 records in 00:04:17.179 1+0 records out 00:04:17.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390287 s, 10.5 MB/s 00:04:17.179 19:04:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:17.179 19:04:54 -- common/autotest_common.sh@872 -- # size=4096 00:04:17.179 19:04:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:17.179 19:04:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:17.179 19:04:54 -- common/autotest_common.sh@875 -- # return 0 00:04:17.179 19:04:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:17.179 19:04:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.179 19:04:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:17.179 19:04:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.179 19:04:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.439 19:04:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:17.439 { 00:04:17.439 "bdev_name": "Malloc0", 00:04:17.439 "nbd_device": "/dev/nbd0" 00:04:17.439 }, 00:04:17.439 { 00:04:17.439 "bdev_name": "Malloc1", 00:04:17.439 "nbd_device": "/dev/nbd1" 00:04:17.439 } 00:04:17.439 ]' 00:04:17.439 19:04:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:17.439 { 00:04:17.439 "bdev_name": "Malloc0", 00:04:17.439 "nbd_device": "/dev/nbd0" 00:04:17.439 }, 00:04:17.439 { 00:04:17.439 "bdev_name": "Malloc1", 00:04:17.439 "nbd_device": "/dev/nbd1" 00:04:17.439 } 00:04:17.439 ]' 00:04:17.439 19:04:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.439 19:04:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:17.439 /dev/nbd1' 00:04:17.439 19:04:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:17.439 /dev/nbd1' 00:04:17.439 19:04:54 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.439 19:04:54 -- bdev/nbd_common.sh@65 -- # count=2 00:04:17.439 19:04:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@95 -- # count=2 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:17.698 256+0 records in 00:04:17.698 256+0 records out 00:04:17.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00705205 s, 149 MB/s 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:17.698 256+0 records in 00:04:17.698 256+0 records out 00:04:17.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268656 s, 39.0 MB/s 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:17.698 256+0 records in 00:04:17.698 256+0 records out 00:04:17.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288937 s, 36.3 MB/s 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@51 -- # local i 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.698 19:04:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:17.957 19:04:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:17.957 19:04:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:17.957 19:04:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:17.957 19:04:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.957 19:04:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.957 19:04:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:17.957 19:04:55 -- bdev/nbd_common.sh@41 -- # break 00:04:17.957 19:04:55 -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.957 19:04:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.957 19:04:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@41 -- # break 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.215 19:04:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.474 19:04:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:18.474 19:04:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:18.474 19:04:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.733 19:04:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:18.733 19:04:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.733 19:04:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:18.733 19:04:55 -- bdev/nbd_common.sh@65 -- # true 00:04:18.733 19:04:55 -- bdev/nbd_common.sh@65 -- # count=0 00:04:18.733 19:04:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:18.733 19:04:55 -- bdev/nbd_common.sh@104 -- # count=0 00:04:18.733 19:04:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:18.733 19:04:55 -- bdev/nbd_common.sh@109 -- # return 0 00:04:18.733 19:04:55 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:18.993 19:04:56 -- event/event.sh@35 -- # sleep 3 00:04:19.253 [2024-02-14 19:04:56.583346] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:19.512 [2024-02-14 19:04:56.728196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.512 [2024-02-14 19:04:56.728205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.512 [2024-02-14 19:04:56.798041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:19.512 [2024-02-14 19:04:56.798146] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
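The round that just completed exercises SPDK's nbd path end to end: two 64 MiB malloc bdevs (4 KiB blocks) are created over the RPC socket, exported as /dev/nbd0 and /dev/nbd1, probed, filled with the same 1 MiB of random data, read back and compared. A condensed sketch of that cycle, using the paths seen in the trace; the RPC shell variable and the retry back-off inside the waitfornbd loop are assumptions for readability, not harness code:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    TESTDIR=/home/vagrant/spdk_repo/spdk/test/event

    $RPC bdev_malloc_create 64 4096            # 64 MiB bdev, 4 KiB blocks -> Malloc0
    $RPC bdev_malloc_create 64 4096            # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1

    # waitfornbd: poll /proc/partitions, then prove the device answers a direct read
    for nbd in nbd0 nbd1; do
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1                          # back-off interval assumed, not visible in the trace
        done
        dd if=/dev/$nbd of=$TESTDIR/nbdtest bs=4096 count=1 iflag=direct
    done

    # write the same 1 MiB of random data through both devices and read it back
    dd if=/dev/urandom of=$TESTDIR/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=$TESTDIR/nbdrandtest of=$dev bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $TESTDIR/nbdrandtest "$dev"   # any mismatch fails the round
    done

The iflag=direct / oflag=direct flags matter here: they bypass the page cache, so the comparison really goes through the nbd kernel driver and the SPDK malloc bdev rather than a cached copy.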
00:04:22.046 19:04:59 -- event/event.sh@23 -- # for i in {0..2} 00:04:22.046 spdk_app_start Round 1 00:04:22.046 19:04:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:22.046 19:04:59 -- event/event.sh@25 -- # waitforlisten 56894 /var/tmp/spdk-nbd.sock 00:04:22.046 19:04:59 -- common/autotest_common.sh@817 -- # '[' -z 56894 ']' 00:04:22.046 19:04:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.046 19:04:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:22.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:22.046 19:04:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.046 19:04:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:22.046 19:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:22.305 19:04:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:22.305 19:04:59 -- common/autotest_common.sh@850 -- # return 0 00:04:22.305 19:04:59 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.564 Malloc0 00:04:22.564 19:04:59 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.823 Malloc1 00:04:22.823 19:05:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@12 -- # local i 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.823 19:05:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.082 /dev/nbd0 00:04:23.082 19:05:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.082 19:05:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:23.082 19:05:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:23.082 19:05:00 -- common/autotest_common.sh@855 -- # local i 00:04:23.082 19:05:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:23.082 19:05:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:23.082 19:05:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:23.082 19:05:00 -- common/autotest_common.sh@859 -- # break 00:04:23.082 19:05:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:23.082 19:05:00 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:04:23.082 19:05:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.082 1+0 records in 00:04:23.082 1+0 records out 00:04:23.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275921 s, 14.8 MB/s 00:04:23.082 19:05:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:23.082 19:05:00 -- common/autotest_common.sh@872 -- # size=4096 00:04:23.082 19:05:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:23.082 19:05:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:23.082 19:05:00 -- common/autotest_common.sh@875 -- # return 0 00:04:23.082 19:05:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.082 19:05:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.082 19:05:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.342 /dev/nbd1 00:04:23.342 19:05:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.342 19:05:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.342 19:05:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:23.342 19:05:00 -- common/autotest_common.sh@855 -- # local i 00:04:23.342 19:05:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:23.342 19:05:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:23.342 19:05:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:23.601 19:05:00 -- common/autotest_common.sh@859 -- # break 00:04:23.601 19:05:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:23.601 19:05:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:23.601 19:05:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.601 1+0 records in 00:04:23.601 1+0 records out 00:04:23.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482568 s, 8.5 MB/s 00:04:23.601 19:05:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:23.601 19:05:00 -- common/autotest_common.sh@872 -- # size=4096 00:04:23.601 19:05:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:23.601 19:05:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:23.601 19:05:00 -- common/autotest_common.sh@875 -- # return 0 00:04:23.601 19:05:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.601 19:05:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.602 19:05:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.602 19:05:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.602 19:05:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.860 19:05:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:23.860 { 00:04:23.860 "bdev_name": "Malloc0", 00:04:23.860 "nbd_device": "/dev/nbd0" 00:04:23.860 }, 00:04:23.860 { 00:04:23.860 "bdev_name": "Malloc1", 00:04:23.860 "nbd_device": "/dev/nbd1" 00:04:23.860 } 00:04:23.860 ]' 00:04:23.860 19:05:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:23.860 { 00:04:23.860 "bdev_name": "Malloc0", 00:04:23.860 "nbd_device": "/dev/nbd0" 00:04:23.860 }, 00:04:23.860 { 00:04:23.860 "bdev_name": "Malloc1", 00:04:23.861 "nbd_device": "/dev/nbd1" 00:04:23.861 } 
00:04:23.861 ]' 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:23.861 /dev/nbd1' 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:23.861 /dev/nbd1' 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@65 -- # count=2 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@95 -- # count=2 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:23.861 256+0 records in 00:04:23.861 256+0 records out 00:04:23.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00895157 s, 117 MB/s 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:23.861 256+0 records in 00:04:23.861 256+0 records out 00:04:23.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028619 s, 36.6 MB/s 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:23.861 256+0 records in 00:04:23.861 256+0 records out 00:04:23.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333356 s, 31.5 MB/s 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:04:23.861 19:05:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@51 -- # local i 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.861 19:05:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.429 19:05:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.429 19:05:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.429 19:05:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.429 19:05:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.429 19:05:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.429 19:05:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.429 19:05:01 -- bdev/nbd_common.sh@41 -- # break 00:04:24.429 19:05:01 -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.429 19:05:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.429 19:05:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@41 -- # break 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.688 19:05:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@65 -- # true 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.947 19:05:02 -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.947 19:05:02 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:25.206 19:05:02 -- event/event.sh@35 -- # sleep 3 00:04:25.466 [2024-02-14 19:05:02.855774] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.724 [2024-02-14 19:05:03.008637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.724 [2024-02-14 19:05:03.008643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.724 [2024-02-14 19:05:03.076808] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
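Teardown in the round above is checked as carefully as setup: each device is detached over RPC, the test waits for the kernel to drop it, and the final nbd_get_disks listing must be empty. A sketch of that check, with the same caveat that the polling interval is an assumption:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    for dev in /dev/nbd0 /dev/nbd1; do
        $RPC nbd_stop_disk "$dev"
        nbd=$(basename "$dev")
        # waitfornbd_exit: wait until the device disappears from /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions || break
            sleep 0.1                          # assumed back-off
        done
    done

    # nbd_get_count: the RPC listing must not mention any /dev/nbd device any more
    count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]

This empty-list check is why the trace shows grep -c reporting 0 and the '[' 0 -ne 0 ']' test falling through: an nbd device left over from one round would poison the next one.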
00:04:25.724 [2024-02-14 19:05:03.076919] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.256 19:05:05 -- event/event.sh@23 -- # for i in {0..2} 00:04:28.256 spdk_app_start Round 2 00:04:28.257 19:05:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:28.257 19:05:05 -- event/event.sh@25 -- # waitforlisten 56894 /var/tmp/spdk-nbd.sock 00:04:28.257 19:05:05 -- common/autotest_common.sh@817 -- # '[' -z 56894 ']' 00:04:28.257 19:05:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.257 19:05:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:28.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:28.257 19:05:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.257 19:05:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:28.257 19:05:05 -- common/autotest_common.sh@10 -- # set +x 00:04:28.515 19:05:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:28.515 19:05:05 -- common/autotest_common.sh@850 -- # return 0 00:04:28.515 19:05:05 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.773 Malloc0 00:04:28.773 19:05:06 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.340 Malloc1 00:04:29.340 19:05:06 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@12 -- # local i 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.340 19:05:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.599 /dev/nbd0 00:04:29.599 19:05:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.599 19:05:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.599 19:05:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:29.599 19:05:06 -- common/autotest_common.sh@855 -- # local i 00:04:29.599 19:05:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:29.599 19:05:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:29.599 19:05:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:29.599 19:05:06 -- common/autotest_common.sh@859 
-- # break 00:04:29.599 19:05:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:29.599 19:05:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:29.599 19:05:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.599 1+0 records in 00:04:29.599 1+0 records out 00:04:29.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331241 s, 12.4 MB/s 00:04:29.599 19:05:06 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.599 19:05:06 -- common/autotest_common.sh@872 -- # size=4096 00:04:29.599 19:05:06 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.599 19:05:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:29.599 19:05:06 -- common/autotest_common.sh@875 -- # return 0 00:04:29.599 19:05:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.599 19:05:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.599 19:05:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:29.857 /dev/nbd1 00:04:29.857 19:05:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:29.857 19:05:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:29.857 19:05:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:29.857 19:05:07 -- common/autotest_common.sh@855 -- # local i 00:04:29.857 19:05:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:29.857 19:05:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:29.857 19:05:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:29.857 19:05:07 -- common/autotest_common.sh@859 -- # break 00:04:29.857 19:05:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:29.857 19:05:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:29.857 19:05:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.857 1+0 records in 00:04:29.857 1+0 records out 00:04:29.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302455 s, 13.5 MB/s 00:04:29.857 19:05:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.857 19:05:07 -- common/autotest_common.sh@872 -- # size=4096 00:04:29.857 19:05:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.857 19:05:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:29.857 19:05:07 -- common/autotest_common.sh@875 -- # return 0 00:04:29.858 19:05:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.858 19:05:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.858 19:05:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.858 19:05:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.858 19:05:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:30.116 { 00:04:30.116 "bdev_name": "Malloc0", 00:04:30.116 "nbd_device": "/dev/nbd0" 00:04:30.116 }, 00:04:30.116 { 00:04:30.116 "bdev_name": "Malloc1", 00:04:30.116 "nbd_device": "/dev/nbd1" 00:04:30.116 } 00:04:30.116 ]' 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.116 { 00:04:30.116 "bdev_name": "Malloc0", 00:04:30.116 
"nbd_device": "/dev/nbd0" 00:04:30.116 }, 00:04:30.116 { 00:04:30.116 "bdev_name": "Malloc1", 00:04:30.116 "nbd_device": "/dev/nbd1" 00:04:30.116 } 00:04:30.116 ]' 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.116 /dev/nbd1' 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.116 /dev/nbd1' 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.116 256+0 records in 00:04:30.116 256+0 records out 00:04:30.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00858048 s, 122 MB/s 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.116 256+0 records in 00:04:30.116 256+0 records out 00:04:30.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269315 s, 38.9 MB/s 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.116 256+0 records in 00:04:30.116 256+0 records out 00:04:30.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032242 s, 32.5 MB/s 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.116 19:05:07 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@51 -- # local i 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.116 19:05:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.683 19:05:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.683 19:05:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.683 19:05:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.683 19:05:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.683 19:05:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.683 19:05:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.683 19:05:07 -- bdev/nbd_common.sh@41 -- # break 00:04:30.683 19:05:07 -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.683 19:05:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.683 19:05:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@41 -- # break 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.683 19:05:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.946 19:05:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:30.946 19:05:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:30.946 19:05:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.221 19:05:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.221 19:05:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.221 19:05:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.221 19:05:08 -- bdev/nbd_common.sh@65 -- # true 00:04:31.221 19:05:08 -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.221 19:05:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.221 19:05:08 -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.221 19:05:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.221 19:05:08 -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.221 19:05:08 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.485 19:05:08 -- event/event.sh@35 -- # sleep 3 00:04:31.744 [2024-02-14 19:05:08.991956] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.744 [2024-02-14 19:05:09.137248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.744 [2024-02-14 19:05:09.137264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.002 [2024-02-14 19:05:09.202542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:04:32.002 [2024-02-14 19:05:09.202630] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.534 19:05:11 -- event/event.sh@38 -- # waitforlisten 56894 /var/tmp/spdk-nbd.sock 00:04:34.534 19:05:11 -- common/autotest_common.sh@817 -- # '[' -z 56894 ']' 00:04:34.534 19:05:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.534 19:05:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:34.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:34.534 19:05:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.534 19:05:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:34.534 19:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:34.794 19:05:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:34.794 19:05:12 -- common/autotest_common.sh@850 -- # return 0 00:04:34.794 19:05:12 -- event/event.sh@39 -- # killprocess 56894 00:04:34.794 19:05:12 -- common/autotest_common.sh@924 -- # '[' -z 56894 ']' 00:04:34.794 19:05:12 -- common/autotest_common.sh@928 -- # kill -0 56894 00:04:34.794 19:05:12 -- common/autotest_common.sh@929 -- # uname 00:04:34.794 19:05:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:34.794 19:05:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 56894 00:04:34.794 19:05:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:34.794 killing process with pid 56894 00:04:34.794 19:05:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:34.794 19:05:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 56894' 00:04:34.794 19:05:12 -- common/autotest_common.sh@943 -- # kill 56894 00:04:34.794 19:05:12 -- common/autotest_common.sh@948 -- # wait 56894 00:04:35.053 spdk_app_start is called in Round 0. 00:04:35.053 Shutdown signal received, stop current app iteration 00:04:35.053 Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 reinitialization... 00:04:35.053 spdk_app_start is called in Round 1. 00:04:35.053 Shutdown signal received, stop current app iteration 00:04:35.053 Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 reinitialization... 00:04:35.053 spdk_app_start is called in Round 2. 00:04:35.053 Shutdown signal received, stop current app iteration 00:04:35.053 Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 reinitialization... 00:04:35.053 spdk_app_start is called in Round 3. 
00:04:35.053 Shutdown signal received, stop current app iteration 00:04:35.053 19:05:12 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:35.053 19:05:12 -- event/event.sh@42 -- # return 0 00:04:35.053 00:04:35.053 real 0m20.176s 00:04:35.053 user 0m44.817s 00:04:35.053 sys 0m3.472s 00:04:35.053 19:05:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.053 19:05:12 -- common/autotest_common.sh@10 -- # set +x 00:04:35.053 ************************************ 00:04:35.053 END TEST app_repeat 00:04:35.053 ************************************ 00:04:35.053 19:05:12 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:35.053 19:05:12 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:35.053 19:05:12 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:35.053 19:05:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:35.053 19:05:12 -- common/autotest_common.sh@10 -- # set +x 00:04:35.053 ************************************ 00:04:35.053 START TEST cpu_locks 00:04:35.053 ************************************ 00:04:35.053 19:05:12 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:35.311 * Looking for test storage... 00:04:35.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:35.312 19:05:12 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:35.312 19:05:12 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:35.312 19:05:12 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:35.312 19:05:12 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:35.312 19:05:12 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:35.312 19:05:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:35.312 19:05:12 -- common/autotest_common.sh@10 -- # set +x 00:04:35.312 ************************************ 00:04:35.312 START TEST default_locks 00:04:35.312 ************************************ 00:04:35.312 19:05:12 -- common/autotest_common.sh@1102 -- # default_locks 00:04:35.312 19:05:12 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57529 00:04:35.312 19:05:12 -- event/cpu_locks.sh@47 -- # waitforlisten 57529 00:04:35.312 19:05:12 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.312 19:05:12 -- common/autotest_common.sh@817 -- # '[' -z 57529 ']' 00:04:35.312 19:05:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.312 19:05:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:35.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.312 19:05:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.312 19:05:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:35.312 19:05:12 -- common/autotest_common.sh@10 -- # set +x 00:04:35.312 [2024-02-14 19:05:12.566039] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
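Before the log moves on to cpu_locks: the app_repeat test that just finished follows a simple shape. The app_repeat binary keeps its pid (56894 in this run) for the whole test and restarts its app framework after every SIGTERM, while the shell side re-runs the same nbd verification each round. A condensed paraphrase of the event.sh flow traced above; waitforlisten, nbd_rpc_data_verify and killprocess are helpers from the SPDK test harness:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for i in 0 1 2; do
        echo "spdk_app_start Round $i"
        waitforlisten 56894 /var/tmp/spdk-nbd.sock    # app is up and serving RPC again
        $rpc bdev_malloc_create 64 4096               # Malloc0
        $rpc bdev_malloc_create 64 4096               # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc spdk_kill_instance SIGTERM               # end the round; the app restarts itself
        sleep 3
    done
    waitforlisten 56894 /var/tmp/spdk-nbd.sock        # Round 3: nothing to verify,
    killprocess 56894                                 # just stop the process for good

That is also why the shutdown summary printed by the app lists Rounds 0 through 3: three full verification rounds plus a final round that exists only to be shut down.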
00:04:35.312 [2024-02-14 19:05:12.566156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57529 ] 00:04:35.312 [2024-02-14 19:05:12.704893] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.571 [2024-02-14 19:05:12.831378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:35.571 [2024-02-14 19:05:12.831567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.507 19:05:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:36.507 19:05:13 -- common/autotest_common.sh@850 -- # return 0 00:04:36.507 19:05:13 -- event/cpu_locks.sh@49 -- # locks_exist 57529 00:04:36.507 19:05:13 -- event/cpu_locks.sh@22 -- # lslocks -p 57529 00:04:36.507 19:05:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.766 19:05:13 -- event/cpu_locks.sh@50 -- # killprocess 57529 00:04:36.766 19:05:13 -- common/autotest_common.sh@924 -- # '[' -z 57529 ']' 00:04:36.766 19:05:13 -- common/autotest_common.sh@928 -- # kill -0 57529 00:04:36.766 19:05:13 -- common/autotest_common.sh@929 -- # uname 00:04:36.766 19:05:14 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:36.766 19:05:14 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57529 00:04:36.766 19:05:14 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:36.766 killing process with pid 57529 00:04:36.766 19:05:14 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:36.766 19:05:14 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57529' 00:04:36.766 19:05:14 -- common/autotest_common.sh@943 -- # kill 57529 00:04:36.766 19:05:14 -- common/autotest_common.sh@948 -- # wait 57529 00:04:37.702 19:05:14 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57529 00:04:37.702 19:05:14 -- common/autotest_common.sh@638 -- # local es=0 00:04:37.702 19:05:14 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 57529 00:04:37.702 19:05:14 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:37.702 19:05:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.702 19:05:14 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:37.702 19:05:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.702 19:05:14 -- common/autotest_common.sh@641 -- # waitforlisten 57529 00:04:37.703 19:05:14 -- common/autotest_common.sh@817 -- # '[' -z 57529 ']' 00:04:37.703 19:05:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.703 19:05:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:37.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.703 19:05:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:37.703 19:05:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:37.703 19:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.703 ERROR: process (pid: 57529) is no longer running 00:04:37.703 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (57529) - No such process 00:04:37.703 19:05:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.703 19:05:14 -- common/autotest_common.sh@850 -- # return 1 00:04:37.703 19:05:14 -- common/autotest_common.sh@641 -- # es=1 00:04:37.703 19:05:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:37.703 19:05:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:37.703 19:05:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:37.703 19:05:14 -- event/cpu_locks.sh@54 -- # no_locks 00:04:37.703 19:05:14 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:37.703 19:05:14 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:37.703 19:05:14 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:37.703 00:04:37.703 real 0m2.360s 00:04:37.703 user 0m2.500s 00:04:37.703 sys 0m0.609s 00:04:37.703 19:05:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.703 ************************************ 00:04:37.703 END TEST default_locks 00:04:37.703 19:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.703 ************************************ 00:04:37.703 19:05:14 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:37.703 19:05:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:37.703 19:05:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:37.703 19:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.703 ************************************ 00:04:37.703 START TEST default_locks_via_rpc 00:04:37.703 ************************************ 00:04:37.703 19:05:14 -- common/autotest_common.sh@1102 -- # default_locks_via_rpc 00:04:37.703 19:05:14 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57597 00:04:37.703 19:05:14 -- event/cpu_locks.sh@63 -- # waitforlisten 57597 00:04:37.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.703 19:05:14 -- common/autotest_common.sh@817 -- # '[' -z 57597 ']' 00:04:37.703 19:05:14 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.703 19:05:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.703 19:05:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:37.703 19:05:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.703 19:05:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:37.703 19:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.703 [2024-02-14 19:05:14.985806] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
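The default_locks test that just ended checks that a target started with -m 0x1 really holds a CPU-core lock, and that the harness notices when the process goes away. Distilled from the trace above; waitforlisten and killprocess are harness helpers, and NOT is the harness wrapper that expects its command to fail:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    waitforlisten "$pid"
    lslocks -p "$pid" | grep -q spdk_cpu_lock     # locks_exist: the core-0 lock file is held
    killprocess "$pid"
    NOT waitforlisten "$pid"                      # a dead pid can never start listening

The "ERROR: process (pid: 57529) is no longer running" and "No such process" lines above are the expected output of that last step, not a failure of the test.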
00:04:37.703 [2024-02-14 19:05:14.985944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57597 ] 00:04:37.962 [2024-02-14 19:05:15.125425] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.962 [2024-02-14 19:05:15.265040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:37.962 [2024-02-14 19:05:15.265210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.898 19:05:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:38.898 19:05:15 -- common/autotest_common.sh@850 -- # return 0 00:04:38.898 19:05:15 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:38.898 19:05:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:38.898 19:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.898 19:05:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:38.898 19:05:15 -- event/cpu_locks.sh@67 -- # no_locks 00:04:38.898 19:05:15 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:38.898 19:05:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:38.898 19:05:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:38.898 19:05:15 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:38.898 19:05:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:38.898 19:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:38.898 19:05:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:38.898 19:05:15 -- event/cpu_locks.sh@71 -- # locks_exist 57597 00:04:38.898 19:05:15 -- event/cpu_locks.sh@22 -- # lslocks -p 57597 00:04:38.898 19:05:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.157 19:05:16 -- event/cpu_locks.sh@73 -- # killprocess 57597 00:04:39.157 19:05:16 -- common/autotest_common.sh@924 -- # '[' -z 57597 ']' 00:04:39.157 19:05:16 -- common/autotest_common.sh@928 -- # kill -0 57597 00:04:39.157 19:05:16 -- common/autotest_common.sh@929 -- # uname 00:04:39.157 19:05:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:39.157 19:05:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57597 00:04:39.157 killing process with pid 57597 00:04:39.157 19:05:16 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:39.157 19:05:16 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:39.157 19:05:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57597' 00:04:39.157 19:05:16 -- common/autotest_common.sh@943 -- # kill 57597 00:04:39.157 19:05:16 -- common/autotest_common.sh@948 -- # wait 57597 00:04:39.725 00:04:39.725 real 0m2.000s 00:04:39.725 user 0m2.118s 00:04:39.725 sys 0m0.611s 00:04:39.725 19:05:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.725 ************************************ 00:04:39.725 END TEST default_locks_via_rpc 00:04:39.725 ************************************ 00:04:39.725 19:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:39.725 19:05:16 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:39.725 19:05:16 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:39.725 19:05:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:39.725 19:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:39.725 
************************************ 00:04:39.725 START TEST non_locking_app_on_locked_coremask 00:04:39.725 ************************************ 00:04:39.725 19:05:16 -- common/autotest_common.sh@1102 -- # non_locking_app_on_locked_coremask 00:04:39.725 19:05:16 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57662 00:04:39.725 19:05:16 -- event/cpu_locks.sh@81 -- # waitforlisten 57662 /var/tmp/spdk.sock 00:04:39.725 19:05:16 -- common/autotest_common.sh@817 -- # '[' -z 57662 ']' 00:04:39.725 19:05:16 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.725 19:05:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.725 19:05:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:39.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.725 19:05:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.725 19:05:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:39.725 19:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:39.725 [2024-02-14 19:05:17.048202] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:39.725 [2024-02-14 19:05:17.048320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57662 ] 00:04:39.983 [2024-02-14 19:05:17.187570] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.984 [2024-02-14 19:05:17.309376] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:39.984 [2024-02-14 19:05:17.309609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.920 19:05:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:40.920 19:05:17 -- common/autotest_common.sh@850 -- # return 0 00:04:40.920 19:05:17 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57689 00:04:40.920 19:05:17 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:40.920 19:05:17 -- event/cpu_locks.sh@85 -- # waitforlisten 57689 /var/tmp/spdk2.sock 00:04:40.920 19:05:17 -- common/autotest_common.sh@817 -- # '[' -z 57689 ']' 00:04:40.920 19:05:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.920 19:05:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:40.921 19:05:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.921 19:05:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:40.921 19:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.921 [2024-02-14 19:05:18.063933] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:40.921 [2024-02-14 19:05:18.064369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57689 ] 00:04:40.921 [2024-02-14 19:05:18.209806] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:40.921 [2024-02-14 19:05:18.209876] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.180 [2024-02-14 19:05:18.463723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:41.180 [2024-02-14 19:05:18.463894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.754 19:05:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:41.754 19:05:19 -- common/autotest_common.sh@850 -- # return 0 00:04:41.754 19:05:19 -- event/cpu_locks.sh@87 -- # locks_exist 57662 00:04:41.754 19:05:19 -- event/cpu_locks.sh@22 -- # lslocks -p 57662 00:04:41.754 19:05:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.690 19:05:19 -- event/cpu_locks.sh@89 -- # killprocess 57662 00:04:42.690 19:05:19 -- common/autotest_common.sh@924 -- # '[' -z 57662 ']' 00:04:42.690 19:05:19 -- common/autotest_common.sh@928 -- # kill -0 57662 00:04:42.690 19:05:19 -- common/autotest_common.sh@929 -- # uname 00:04:42.690 19:05:19 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:42.690 19:05:19 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57662 00:04:42.690 killing process with pid 57662 00:04:42.690 19:05:19 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:42.690 19:05:19 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:42.690 19:05:19 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57662' 00:04:42.690 19:05:19 -- common/autotest_common.sh@943 -- # kill 57662 00:04:42.690 19:05:19 -- common/autotest_common.sh@948 -- # wait 57662 00:04:44.069 19:05:21 -- event/cpu_locks.sh@90 -- # killprocess 57689 00:04:44.069 19:05:21 -- common/autotest_common.sh@924 -- # '[' -z 57689 ']' 00:04:44.069 19:05:21 -- common/autotest_common.sh@928 -- # kill -0 57689 00:04:44.069 19:05:21 -- common/autotest_common.sh@929 -- # uname 00:04:44.069 19:05:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:44.069 19:05:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57689 00:04:44.069 killing process with pid 57689 00:04:44.069 19:05:21 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:44.069 19:05:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:44.069 19:05:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57689' 00:04:44.069 19:05:21 -- common/autotest_common.sh@943 -- # kill 57689 00:04:44.069 19:05:21 -- common/autotest_common.sh@948 -- # wait 57689 00:04:45.446 00:04:45.446 real 0m5.496s 00:04:45.446 user 0m5.849s 00:04:45.446 sys 0m1.149s 00:04:45.446 19:05:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.446 ************************************ 00:04:45.446 END TEST non_locking_app_on_locked_coremask 00:04:45.446 ************************************ 00:04:45.446 19:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.446 19:05:22 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:45.446 19:05:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:45.446 19:05:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:45.446 19:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.446 ************************************ 00:04:45.446 START TEST locking_app_on_unlocked_coremask 00:04:45.446 ************************************ 00:04:45.446 19:05:22 -- common/autotest_common.sh@1102 -- # locking_app_on_unlocked_coremask 00:04:45.446 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.446 19:05:22 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=57790 00:04:45.446 19:05:22 -- event/cpu_locks.sh@99 -- # waitforlisten 57790 /var/tmp/spdk.sock 00:04:45.446 19:05:22 -- common/autotest_common.sh@817 -- # '[' -z 57790 ']' 00:04:45.446 19:05:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.446 19:05:22 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:45.446 19:05:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:45.446 19:05:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.446 19:05:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:45.446 19:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.446 [2024-02-14 19:05:22.589936] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:45.446 [2024-02-14 19:05:22.590085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57790 ] 00:04:45.446 [2024-02-14 19:05:22.725538] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:45.446 [2024-02-14 19:05:22.725652] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.705 [2024-02-14 19:05:22.903266] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:45.705 [2024-02-14 19:05:22.903511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.272 19:05:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:46.272 19:05:23 -- common/autotest_common.sh@850 -- # return 0 00:04:46.272 19:05:23 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57818 00:04:46.272 19:05:23 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:46.272 19:05:23 -- event/cpu_locks.sh@103 -- # waitforlisten 57818 /var/tmp/spdk2.sock 00:04:46.272 19:05:23 -- common/autotest_common.sh@817 -- # '[' -z 57818 ']' 00:04:46.272 19:05:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.272 19:05:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:46.272 19:05:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.272 19:05:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:46.272 19:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:46.272 [2024-02-14 19:05:23.644654] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:46.272 [2024-02-14 19:05:23.645100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57818 ] 00:04:46.531 [2024-02-14 19:05:23.790256] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.790 [2024-02-14 19:05:24.165356] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.790 [2024-02-14 19:05:24.165592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.165 19:05:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:48.165 19:05:25 -- common/autotest_common.sh@850 -- # return 0 00:04:48.165 19:05:25 -- event/cpu_locks.sh@105 -- # locks_exist 57818 00:04:48.165 19:05:25 -- event/cpu_locks.sh@22 -- # lslocks -p 57818 00:04:48.165 19:05:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.101 19:05:26 -- event/cpu_locks.sh@107 -- # killprocess 57790 00:04:49.101 19:05:26 -- common/autotest_common.sh@924 -- # '[' -z 57790 ']' 00:04:49.101 19:05:26 -- common/autotest_common.sh@928 -- # kill -0 57790 00:04:49.101 19:05:26 -- common/autotest_common.sh@929 -- # uname 00:04:49.101 19:05:26 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:49.101 19:05:26 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57790 00:04:49.101 killing process with pid 57790 00:04:49.101 19:05:26 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:49.101 19:05:26 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:49.101 19:05:26 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57790' 00:04:49.101 19:05:26 -- common/autotest_common.sh@943 -- # kill 57790 00:04:49.101 19:05:26 -- common/autotest_common.sh@948 -- # wait 57790 00:04:51.009 19:05:28 -- event/cpu_locks.sh@108 -- # killprocess 57818 00:04:51.009 19:05:28 -- common/autotest_common.sh@924 -- # '[' -z 57818 ']' 00:04:51.009 19:05:28 -- common/autotest_common.sh@928 -- # kill -0 57818 00:04:51.009 19:05:28 -- common/autotest_common.sh@929 -- # uname 00:04:51.009 19:05:28 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:51.009 19:05:28 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57818 00:04:51.009 killing process with pid 57818 00:04:51.009 19:05:28 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:51.009 19:05:28 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:51.009 19:05:28 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57818' 00:04:51.009 19:05:28 -- common/autotest_common.sh@943 -- # kill 57818 00:04:51.009 19:05:28 -- common/autotest_common.sh@948 -- # wait 57818 00:04:51.945 ************************************ 00:04:51.945 END TEST locking_app_on_unlocked_coremask 00:04:51.945 ************************************ 00:04:51.945 00:04:51.945 real 0m6.489s 00:04:51.945 user 0m6.703s 00:04:51.945 sys 0m1.639s 00:04:51.945 19:05:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.945 19:05:29 -- common/autotest_common.sh@10 -- # set +x 00:04:51.945 19:05:29 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:51.945 19:05:29 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:51.945 19:05:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:51.945 19:05:29 -- common/autotest_common.sh@10 -- # set +x 
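The locks_exist checks in the two cases above reduce to one pipeline; a minimal sketch with the pid parameterized (command and lock-file name taken from the cpu_locks.sh@22 trace lines, $spdk_tgt_pid standing in for the concrete pids 57662 and 57818):
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # true only while the target holds its /var/tmp/spdk_cpu_lock_* lock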
00:04:51.945 ************************************ 00:04:51.945 START TEST locking_app_on_locked_coremask 00:04:51.945 ************************************ 00:04:51.945 19:05:29 -- common/autotest_common.sh@1102 -- # locking_app_on_locked_coremask 00:04:51.945 19:05:29 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57927 00:04:51.945 19:05:29 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.945 19:05:29 -- event/cpu_locks.sh@116 -- # waitforlisten 57927 /var/tmp/spdk.sock 00:04:51.945 19:05:29 -- common/autotest_common.sh@817 -- # '[' -z 57927 ']' 00:04:51.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.945 19:05:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.945 19:05:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:51.945 19:05:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.945 19:05:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:51.945 19:05:29 -- common/autotest_common.sh@10 -- # set +x 00:04:51.945 [2024-02-14 19:05:29.147136] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:51.945 [2024-02-14 19:05:29.147639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57927 ] 00:04:51.945 [2024-02-14 19:05:29.288762] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.203 [2024-02-14 19:05:29.460065] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.203 [2024-02-14 19:05:29.460253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.770 19:05:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:52.770 19:05:30 -- common/autotest_common.sh@850 -- # return 0 00:04:52.771 19:05:30 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57955 00:04:52.771 19:05:30 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57955 /var/tmp/spdk2.sock 00:04:52.771 19:05:30 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:52.771 19:05:30 -- common/autotest_common.sh@638 -- # local es=0 00:04:52.771 19:05:30 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 57955 /var/tmp/spdk2.sock 00:04:52.771 19:05:30 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:52.771 19:05:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:52.771 19:05:30 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:52.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.771 19:05:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:52.771 19:05:30 -- common/autotest_common.sh@641 -- # waitforlisten 57955 /var/tmp/spdk2.sock 00:04:52.771 19:05:30 -- common/autotest_common.sh@817 -- # '[' -z 57955 ']' 00:04:52.771 19:05:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.771 19:05:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:52.771 19:05:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:52.771 19:05:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:52.771 19:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:53.029 [2024-02-14 19:05:30.217394] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:53.029 [2024-02-14 19:05:30.217545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57955 ] 00:04:53.029 [2024-02-14 19:05:30.366050] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57927 has claimed it. 00:04:53.029 [2024-02-14 19:05:30.366190] app.c: 789:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:53.596 ERROR: process (pid: 57955) is no longer running 00:04:53.596 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (57955) - No such process 00:04:53.596 19:05:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:53.596 19:05:30 -- common/autotest_common.sh@850 -- # return 1 00:04:53.596 19:05:30 -- common/autotest_common.sh@641 -- # es=1 00:04:53.596 19:05:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:53.596 19:05:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:53.596 19:05:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:53.596 19:05:30 -- event/cpu_locks.sh@122 -- # locks_exist 57927 00:04:53.596 19:05:30 -- event/cpu_locks.sh@22 -- # lslocks -p 57927 00:04:53.596 19:05:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.163 19:05:31 -- event/cpu_locks.sh@124 -- # killprocess 57927 00:04:54.163 19:05:31 -- common/autotest_common.sh@924 -- # '[' -z 57927 ']' 00:04:54.163 19:05:31 -- common/autotest_common.sh@928 -- # kill -0 57927 00:04:54.163 19:05:31 -- common/autotest_common.sh@929 -- # uname 00:04:54.163 19:05:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:54.163 19:05:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 57927 00:04:54.163 killing process with pid 57927 00:04:54.163 19:05:31 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:54.163 19:05:31 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:54.163 19:05:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 57927' 00:04:54.163 19:05:31 -- common/autotest_common.sh@943 -- # kill 57927 00:04:54.163 19:05:31 -- common/autotest_common.sh@948 -- # wait 57927 00:04:54.732 ************************************ 00:04:54.732 END TEST locking_app_on_locked_coremask 00:04:54.732 ************************************ 00:04:54.732 00:04:54.732 real 0m3.015s 00:04:54.732 user 0m3.268s 00:04:54.732 sys 0m0.866s 00:04:54.732 19:05:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.732 19:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:54.732 19:05:32 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:54.732 19:05:32 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:54.732 19:05:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:54.732 19:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:54.991 ************************************ 00:04:54.991 START TEST locking_overlapped_coremask 00:04:54.991 ************************************ 00:04:54.991 19:05:32 -- common/autotest_common.sh@1102 -- # locking_overlapped_coremask 00:04:54.991 19:05:32 -- 
event/cpu_locks.sh@132 -- # spdk_tgt_pid=58012 00:04:54.991 19:05:32 -- event/cpu_locks.sh@133 -- # waitforlisten 58012 /var/tmp/spdk.sock 00:04:54.991 19:05:32 -- common/autotest_common.sh@817 -- # '[' -z 58012 ']' 00:04:54.991 19:05:32 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:04:54.991 19:05:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.991 19:05:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.991 19:05:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.991 19:05:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.991 19:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:54.991 [2024-02-14 19:05:32.221851] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:54.991 [2024-02-14 19:05:32.221977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58012 ] 00:04:54.991 [2024-02-14 19:05:32.362772] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:55.251 [2024-02-14 19:05:32.564991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:55.251 [2024-02-14 19:05:32.565796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.251 [2024-02-14 19:05:32.565895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.251 [2024-02-14 19:05:32.565906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.819 19:05:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:55.819 19:05:33 -- common/autotest_common.sh@850 -- # return 0 00:04:55.819 19:05:33 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58042 00:04:55.819 19:05:33 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:55.819 19:05:33 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58042 /var/tmp/spdk2.sock 00:04:55.819 19:05:33 -- common/autotest_common.sh@638 -- # local es=0 00:04:55.819 19:05:33 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 58042 /var/tmp/spdk2.sock 00:04:55.819 19:05:33 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:55.819 19:05:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:55.819 19:05:33 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:55.819 19:05:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:55.819 19:05:33 -- common/autotest_common.sh@641 -- # waitforlisten 58042 /var/tmp/spdk2.sock 00:04:55.819 19:05:33 -- common/autotest_common.sh@817 -- # '[' -z 58042 ']' 00:04:55.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.819 19:05:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.819 19:05:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:55.819 19:05:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:55.819 19:05:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:55.819 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.078 [2024-02-14 19:05:33.271954] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:56.078 [2024-02-14 19:05:33.272103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58042 ] 00:04:56.078 [2024-02-14 19:05:33.418725] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58012 has claimed it. 00:04:56.078 [2024-02-14 19:05:33.418966] app.c: 789:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:56.646 ERROR: process (pid: 58042) is no longer running 00:04:56.646 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (58042) - No such process 00:04:56.646 19:05:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:56.646 19:05:34 -- common/autotest_common.sh@850 -- # return 1 00:04:56.646 19:05:34 -- common/autotest_common.sh@641 -- # es=1 00:04:56.646 19:05:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:56.646 19:05:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:56.646 19:05:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:56.646 19:05:34 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:56.646 19:05:34 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:56.646 19:05:34 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:56.646 19:05:34 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:56.646 19:05:34 -- event/cpu_locks.sh@141 -- # killprocess 58012 00:04:56.646 19:05:34 -- common/autotest_common.sh@924 -- # '[' -z 58012 ']' 00:04:56.646 19:05:34 -- common/autotest_common.sh@928 -- # kill -0 58012 00:04:56.646 19:05:34 -- common/autotest_common.sh@929 -- # uname 00:04:56.646 19:05:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:56.646 19:05:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58012 00:04:56.646 19:05:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:56.646 19:05:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:56.646 19:05:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58012' 00:04:56.646 killing process with pid 58012 00:04:56.646 19:05:34 -- common/autotest_common.sh@943 -- # kill 58012 00:04:56.646 19:05:34 -- common/autotest_common.sh@948 -- # wait 58012 00:04:58.036 00:04:58.036 real 0m3.059s 00:04:58.036 user 0m7.920s 00:04:58.036 sys 0m0.791s 00:04:58.036 ************************************ 00:04:58.036 END TEST locking_overlapped_coremask 00:04:58.036 ************************************ 00:04:58.036 19:05:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.036 19:05:35 -- common/autotest_common.sh@10 -- # set +x 00:04:58.036 19:05:35 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:58.036 19:05:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:58.036 19:05:35 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:04:58.036 19:05:35 -- common/autotest_common.sh@10 -- # set +x 00:04:58.036 ************************************ 00:04:58.036 START TEST locking_overlapped_coremask_via_rpc 00:04:58.036 ************************************ 00:04:58.036 19:05:35 -- common/autotest_common.sh@1102 -- # locking_overlapped_coremask_via_rpc 00:04:58.036 19:05:35 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58099 00:04:58.036 19:05:35 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:58.036 19:05:35 -- event/cpu_locks.sh@149 -- # waitforlisten 58099 /var/tmp/spdk.sock 00:04:58.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.036 19:05:35 -- common/autotest_common.sh@817 -- # '[' -z 58099 ']' 00:04:58.036 19:05:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.036 19:05:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:58.036 19:05:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.036 19:05:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:58.036 19:05:35 -- common/autotest_common.sh@10 -- # set +x 00:04:58.036 [2024-02-14 19:05:35.338938] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:58.036 [2024-02-14 19:05:35.339057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58099 ] 00:04:58.295 [2024-02-14 19:05:35.480198] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:58.295 [2024-02-14 19:05:35.480315] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.295 [2024-02-14 19:05:35.649445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:58.295 [2024-02-14 19:05:35.650195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.295 [2024-02-14 19:05:35.650282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.295 [2024-02-14 19:05:35.650286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.233 19:05:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:59.233 19:05:36 -- common/autotest_common.sh@850 -- # return 0 00:04:59.233 19:05:36 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58129 00:04:59.233 19:05:36 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:59.233 19:05:36 -- event/cpu_locks.sh@153 -- # waitforlisten 58129 /var/tmp/spdk2.sock 00:04:59.233 19:05:36 -- common/autotest_common.sh@817 -- # '[' -z 58129 ']' 00:04:59.233 19:05:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.233 19:05:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:59.233 19:05:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:59.233 19:05:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:59.233 19:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:59.233 [2024-02-14 19:05:36.409044] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:59.233 [2024-02-14 19:05:36.409224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58129 ] 00:04:59.233 [2024-02-14 19:05:36.561905] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:59.234 [2024-02-14 19:05:36.562034] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.803 [2024-02-14 19:05:37.102326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.803 [2024-02-14 19:05:37.103135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.803 [2024-02-14 19:05:37.103221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:59.803 [2024-02-14 19:05:37.103238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.708 19:05:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:01.708 19:05:38 -- common/autotest_common.sh@850 -- # return 0 00:05:01.708 19:05:38 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:01.708 19:05:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:01.708 19:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.708 19:05:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:01.708 19:05:38 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.708 19:05:38 -- common/autotest_common.sh@638 -- # local es=0 00:05:01.708 19:05:38 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.708 19:05:38 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:01.708 19:05:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:01.708 19:05:38 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:01.708 19:05:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:01.708 19:05:38 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.708 19:05:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:01.708 19:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.708 [2024-02-14 19:05:38.968821] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58099 has claimed it. 
00:05:01.708 2024/02/14 19:05:38 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:01.708 request: 00:05:01.708 { 00:05:01.708 "method": "framework_enable_cpumask_locks", 00:05:01.708 "params": {} 00:05:01.708 } 00:05:01.708 Got JSON-RPC error response 00:05:01.708 GoRPCClient: error on JSON-RPC call 00:05:01.708 19:05:38 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:01.708 19:05:38 -- common/autotest_common.sh@641 -- # es=1 00:05:01.708 19:05:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:01.708 19:05:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:01.708 19:05:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:01.708 19:05:38 -- event/cpu_locks.sh@158 -- # waitforlisten 58099 /var/tmp/spdk.sock 00:05:01.708 19:05:38 -- common/autotest_common.sh@817 -- # '[' -z 58099 ']' 00:05:01.708 19:05:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.708 19:05:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:01.708 19:05:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.708 19:05:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:01.708 19:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.968 19:05:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:01.968 19:05:39 -- common/autotest_common.sh@850 -- # return 0 00:05:01.968 19:05:39 -- event/cpu_locks.sh@159 -- # waitforlisten 58129 /var/tmp/spdk2.sock 00:05:01.968 19:05:39 -- common/autotest_common.sh@817 -- # '[' -z 58129 ']' 00:05:01.968 19:05:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.968 19:05:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:01.968 19:05:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
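The rejection above is the point of the test: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two masks overlap on core 2. Both targets start with --disable-cpumask-locks, the first (58099) then claims its cores when framework_enable_cpumask_locks is sent on the default socket, and the same request against /var/tmp/spdk2.sock fails with -32603 because core 2 is already locked. Condensed from the trace (binary path shortened to spdk_tgt; rpc_cmd resolves to the Go JSON-RPC client in this run):
  spdk_tgt -m 0x7 --disable-cpumask-locks                           # pid 58099, cores 0,1,2, no locks yet
  spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks   # pid 58129, cores 2,3,4, no locks yet
  rpc_cmd framework_enable_cpumask_locks                            # first target now locks cores 0-2
  rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks     # rejected: core 2 already claimed by 58099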
00:05:01.968 19:05:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:01.968 19:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.227 19:05:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:02.227 19:05:39 -- common/autotest_common.sh@850 -- # return 0 00:05:02.227 ************************************ 00:05:02.227 END TEST locking_overlapped_coremask_via_rpc 00:05:02.227 ************************************ 00:05:02.227 19:05:39 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:02.227 19:05:39 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:02.227 19:05:39 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:02.227 19:05:39 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:02.227 00:05:02.227 real 0m4.289s 00:05:02.227 user 0m1.886s 00:05:02.227 sys 0m0.318s 00:05:02.227 19:05:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.227 19:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:02.227 19:05:39 -- event/cpu_locks.sh@174 -- # cleanup 00:05:02.227 19:05:39 -- event/cpu_locks.sh@15 -- # [[ -z 58099 ]] 00:05:02.227 19:05:39 -- event/cpu_locks.sh@15 -- # killprocess 58099 00:05:02.227 19:05:39 -- common/autotest_common.sh@924 -- # '[' -z 58099 ']' 00:05:02.227 19:05:39 -- common/autotest_common.sh@928 -- # kill -0 58099 00:05:02.227 19:05:39 -- common/autotest_common.sh@929 -- # uname 00:05:02.227 19:05:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:02.227 19:05:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58099 00:05:02.227 killing process with pid 58099 00:05:02.227 19:05:39 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:02.227 19:05:39 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:02.227 19:05:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58099' 00:05:02.227 19:05:39 -- common/autotest_common.sh@943 -- # kill 58099 00:05:02.227 19:05:39 -- common/autotest_common.sh@948 -- # wait 58099 00:05:03.606 19:05:40 -- event/cpu_locks.sh@16 -- # [[ -z 58129 ]] 00:05:03.606 19:05:40 -- event/cpu_locks.sh@16 -- # killprocess 58129 00:05:03.606 19:05:40 -- common/autotest_common.sh@924 -- # '[' -z 58129 ']' 00:05:03.606 19:05:40 -- common/autotest_common.sh@928 -- # kill -0 58129 00:05:03.606 19:05:40 -- common/autotest_common.sh@929 -- # uname 00:05:03.606 19:05:40 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:03.606 19:05:40 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58129 00:05:03.606 killing process with pid 58129 00:05:03.606 19:05:40 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:05:03.606 19:05:40 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:05:03.606 19:05:40 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58129' 00:05:03.606 19:05:40 -- common/autotest_common.sh@943 -- # kill 58129 00:05:03.606 19:05:40 -- common/autotest_common.sh@948 -- # wait 58129 00:05:04.990 19:05:42 -- event/cpu_locks.sh@18 -- # rm -f 00:05:04.990 Process with pid 58099 is not found 00:05:04.990 Process with pid 58129 is not found 00:05:04.990 19:05:42 -- event/cpu_locks.sh@1 -- # cleanup 00:05:04.990 19:05:42 -- event/cpu_locks.sh@15 -- # [[ -z 58099 ]] 
00:05:04.990 19:05:42 -- event/cpu_locks.sh@15 -- # killprocess 58099 00:05:04.990 19:05:42 -- common/autotest_common.sh@924 -- # '[' -z 58099 ']' 00:05:04.990 19:05:42 -- common/autotest_common.sh@928 -- # kill -0 58099 00:05:04.990 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (58099) - No such process 00:05:04.990 19:05:42 -- common/autotest_common.sh@951 -- # echo 'Process with pid 58099 is not found' 00:05:04.990 19:05:42 -- event/cpu_locks.sh@16 -- # [[ -z 58129 ]] 00:05:04.990 19:05:42 -- event/cpu_locks.sh@16 -- # killprocess 58129 00:05:04.990 19:05:42 -- common/autotest_common.sh@924 -- # '[' -z 58129 ']' 00:05:04.990 19:05:42 -- common/autotest_common.sh@928 -- # kill -0 58129 00:05:04.990 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (58129) - No such process 00:05:04.990 19:05:42 -- common/autotest_common.sh@951 -- # echo 'Process with pid 58129 is not found' 00:05:04.990 19:05:42 -- event/cpu_locks.sh@18 -- # rm -f 00:05:04.990 ************************************ 00:05:04.990 END TEST cpu_locks 00:05:04.990 ************************************ 00:05:04.990 00:05:04.990 real 0m29.977s 00:05:04.990 user 0m55.950s 00:05:04.990 sys 0m7.699s 00:05:04.990 19:05:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.990 19:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:05.250 ************************************ 00:05:05.250 END TEST event 00:05:05.250 ************************************ 00:05:05.250 00:05:05.250 real 1m0.871s 00:05:05.250 user 2m0.152s 00:05:05.250 sys 0m12.119s 00:05:05.250 19:05:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.250 19:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:05.250 19:05:42 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:05.250 19:05:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:05.250 19:05:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:05.250 19:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:05.250 ************************************ 00:05:05.250 START TEST thread 00:05:05.250 ************************************ 00:05:05.250 19:05:42 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:05.250 * Looking for test storage... 00:05:05.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:05.250 19:05:42 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:05.250 19:05:42 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:05:05.250 19:05:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:05.250 19:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:05.250 ************************************ 00:05:05.250 START TEST thread_poller_perf 00:05:05.250 ************************************ 00:05:05.250 19:05:42 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:05.250 [2024-02-14 19:05:42.610120] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:05.250 [2024-02-14 19:05:42.611675] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58331 ] 00:05:05.510 [2024-02-14 19:05:42.751861] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.769 [2024-02-14 19:05:42.942244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.769 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:07.148 ====================================== 00:05:07.148 busy:2214388488 (cyc) 00:05:07.148 total_run_count: 291000 00:05:07.148 tsc_hz: 2200000000 (cyc) 00:05:07.148 ====================================== 00:05:07.148 poller_cost: 7609 (cyc), 3458 (nsec) 00:05:07.148 00:05:07.148 real 0m1.611s 00:05:07.148 user 0m1.400s 00:05:07.148 sys 0m0.092s 00:05:07.148 19:05:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.148 19:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:07.148 ************************************ 00:05:07.148 END TEST thread_poller_perf 00:05:07.148 ************************************ 00:05:07.148 19:05:44 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:07.148 19:05:44 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:05:07.148 19:05:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:07.148 19:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:07.148 ************************************ 00:05:07.148 START TEST thread_poller_perf 00:05:07.148 ************************************ 00:05:07.148 19:05:44 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:07.148 [2024-02-14 19:05:44.275889] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:07.148 [2024-02-14 19:05:44.276025] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58372 ] 00:05:07.148 [2024-02-14 19:05:44.414526] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.407 Running 1000 pollers for 1 seconds with 0 microseconds period. 
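The figures in the summary block above are consistent with poller_cost being the busy cycle count divided by total_run_count, converted to nanoseconds through tsc_hz:
  2214388488 cyc / 291000 runs ~= 7609 cyc per poll, and 7609 cyc / 2.2 GHz ~= 3458 ns
The busy-poll run started just above (-l 0, no sleep period) works out the same way to 526 cyc and 239 ns in the block that follows.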
00:05:07.407 [2024-02-14 19:05:44.612642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.785 ====================================== 00:05:08.785 busy:2202971568 (cyc) 00:05:08.785 total_run_count: 4188000 00:05:08.785 tsc_hz: 2200000000 (cyc) 00:05:08.785 ====================================== 00:05:08.785 poller_cost: 526 (cyc), 239 (nsec) 00:05:08.785 ************************************ 00:05:08.785 END TEST thread_poller_perf 00:05:08.785 ************************************ 00:05:08.785 00:05:08.785 real 0m1.542s 00:05:08.785 user 0m1.326s 00:05:08.785 sys 0m0.105s 00:05:08.785 19:05:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.785 19:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.785 19:05:45 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:08.785 ************************************ 00:05:08.785 END TEST thread 00:05:08.785 ************************************ 00:05:08.785 00:05:08.785 real 0m3.355s 00:05:08.785 user 0m2.799s 00:05:08.785 sys 0m0.318s 00:05:08.785 19:05:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.785 19:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.785 19:05:45 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:08.785 19:05:45 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:08.785 19:05:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:08.785 19:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.785 ************************************ 00:05:08.785 START TEST accel 00:05:08.785 ************************************ 00:05:08.785 19:05:45 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:08.785 * Looking for test storage... 00:05:08.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:08.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.785 19:05:45 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:08.785 19:05:45 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:08.785 19:05:45 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:08.785 19:05:45 -- accel/accel.sh@59 -- # spdk_tgt_pid=58446 00:05:08.785 19:05:45 -- accel/accel.sh@60 -- # waitforlisten 58446 00:05:08.785 19:05:45 -- common/autotest_common.sh@817 -- # '[' -z 58446 ']' 00:05:08.785 19:05:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.785 19:05:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.785 19:05:45 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:08.785 19:05:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.785 19:05:45 -- accel/accel.sh@58 -- # build_accel_config 00:05:08.785 19:05:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:08.785 19:05:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.785 19:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.785 19:05:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.785 19:05:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.785 19:05:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:08.785 19:05:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:08.785 19:05:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:08.785 19:05:45 -- accel/accel.sh@42 -- # jq -r . 
00:05:08.785 [2024-02-14 19:05:46.060144] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:08.785 [2024-02-14 19:05:46.060588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58446 ] 00:05:08.785 [2024-02-14 19:05:46.199446] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.044 [2024-02-14 19:05:46.386841] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:09.044 [2024-02-14 19:05:46.387418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.044 [2024-02-14 19:05:46.387522] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:09.981 19:05:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.981 19:05:47 -- common/autotest_common.sh@850 -- # return 0 00:05:09.981 19:05:47 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:09.981 19:05:47 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:09.981 19:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:09.981 19:05:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.981 19:05:47 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:09.981 19:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:09.981 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.981 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.981 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.981 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.981 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.981 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.981 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 
-- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # IFS== 00:05:09.982 19:05:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:09.982 19:05:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:09.982 19:05:47 -- accel/accel.sh@67 -- # killprocess 58446 00:05:09.982 19:05:47 -- common/autotest_common.sh@924 -- # '[' -z 58446 ']' 00:05:09.982 19:05:47 -- common/autotest_common.sh@928 -- # kill -0 58446 00:05:09.982 19:05:47 -- common/autotest_common.sh@929 -- # uname 00:05:09.982 19:05:47 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:09.982 19:05:47 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 58446 00:05:09.982 19:05:47 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:09.982 19:05:47 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:09.982 19:05:47 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 58446' 00:05:09.982 killing process with pid 58446 00:05:09.982 19:05:47 -- common/autotest_common.sh@943 -- # kill 58446 00:05:09.982 [2024-02-14 19:05:47.201312] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk 19:05:47 -- common/autotest_common.sh@948 -- # wait 58446 00:05:09.982 _subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:11.360 19:05:48 -- accel/accel.sh@68 -- # trap - ERR 00:05:11.360 19:05:48 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:11.361 19:05:48 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:05:11.361 19:05:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:11.361 19:05:48 -- common/autotest_common.sh@10 
-- # set +x 00:05:11.361 19:05:48 -- common/autotest_common.sh@1102 -- # accel_perf -h 00:05:11.361 19:05:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:11.361 19:05:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.361 19:05:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:11.361 19:05:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.361 19:05:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.361 19:05:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:11.361 19:05:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:11.361 19:05:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:11.361 19:05:48 -- accel/accel.sh@42 -- # jq -r . 00:05:11.361 19:05:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.361 19:05:48 -- common/autotest_common.sh@10 -- # set +x 00:05:11.361 19:05:48 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:11.361 19:05:48 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:11.361 19:05:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:11.361 19:05:48 -- common/autotest_common.sh@10 -- # set +x 00:05:11.361 ************************************ 00:05:11.361 START TEST accel_missing_filename 00:05:11.361 ************************************ 00:05:11.361 19:05:48 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress 00:05:11.361 19:05:48 -- common/autotest_common.sh@638 -- # local es=0 00:05:11.361 19:05:48 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:11.361 19:05:48 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:11.361 19:05:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:11.361 19:05:48 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:11.361 19:05:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:11.361 19:05:48 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:11.361 19:05:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:11.361 19:05:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.361 19:05:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:11.361 19:05:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.361 19:05:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.361 19:05:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:11.361 19:05:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:11.361 19:05:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:11.361 19:05:48 -- accel/accel.sh@42 -- # jq -r . 00:05:11.361 [2024-02-14 19:05:48.553140] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:11.361 [2024-02-14 19:05:48.553295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58526 ] 00:05:11.361 [2024-02-14 19:05:48.689875] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.620 [2024-02-14 19:05:48.939038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.620 [2024-02-14 19:05:48.939213] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:11.880 [2024-02-14 19:05:49.084678] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:11.880 [2024-02-14 19:05:49.084791] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:12.139 [2024-02-14 19:05:49.369466] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:12.398 A filename is required. 00:05:12.398 19:05:49 -- common/autotest_common.sh@641 -- # es=234 00:05:12.398 19:05:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:12.398 19:05:49 -- common/autotest_common.sh@650 -- # es=106 00:05:12.398 19:05:49 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:12.398 19:05:49 -- common/autotest_common.sh@658 -- # es=1 00:05:12.398 19:05:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:12.398 00:05:12.398 real 0m1.072s 00:05:12.398 user 0m0.730s 00:05:12.398 sys 0m0.281s 00:05:12.398 19:05:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.398 ************************************ 00:05:12.398 19:05:49 -- common/autotest_common.sh@10 -- # set +x 00:05:12.398 END TEST accel_missing_filename 00:05:12.398 ************************************ 00:05:12.398 19:05:49 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:12.398 19:05:49 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:05:12.398 19:05:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:12.398 19:05:49 -- common/autotest_common.sh@10 -- # set +x 00:05:12.398 ************************************ 00:05:12.398 START TEST accel_compress_verify 00:05:12.398 ************************************ 00:05:12.398 19:05:49 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:12.398 19:05:49 -- common/autotest_common.sh@638 -- # local es=0 00:05:12.398 19:05:49 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:12.398 19:05:49 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:12.398 19:05:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:12.398 19:05:49 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:12.398 19:05:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:12.398 19:05:49 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:12.398 19:05:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:12.398 19:05:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.398 19:05:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:12.398 19:05:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.398 19:05:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.398 19:05:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:12.398 19:05:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:12.398 19:05:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:12.398 19:05:49 -- accel/accel.sh@42 -- # jq -r . 00:05:12.398 [2024-02-14 19:05:49.689420] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:12.398 [2024-02-14 19:05:49.689606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58556 ] 00:05:12.656 [2024-02-14 19:05:49.828509] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.656 [2024-02-14 19:05:50.035031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.656 [2024-02-14 19:05:50.035223] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:12.915 [2024-02-14 19:05:50.200994] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:12.915 [2024-02-14 19:05:50.201106] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:13.174 [2024-02-14 19:05:50.532527] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:13.432 00:05:13.432 Compression does not support the verify option, aborting. 
00:05:13.432 19:05:50 -- common/autotest_common.sh@641 -- # es=161 00:05:13.432 19:05:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:13.432 19:05:50 -- common/autotest_common.sh@650 -- # es=33 00:05:13.432 19:05:50 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:13.432 19:05:50 -- common/autotest_common.sh@658 -- # es=1 00:05:13.432 ************************************ 00:05:13.432 END TEST accel_compress_verify 00:05:13.432 ************************************ 00:05:13.432 19:05:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:13.432 00:05:13.432 real 0m1.127s 00:05:13.432 user 0m0.775s 00:05:13.432 sys 0m0.287s 00:05:13.432 19:05:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.432 19:05:50 -- common/autotest_common.sh@10 -- # set +x 00:05:13.432 19:05:50 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:13.432 19:05:50 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:13.432 19:05:50 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:13.432 19:05:50 -- common/autotest_common.sh@10 -- # set +x 00:05:13.432 ************************************ 00:05:13.432 START TEST accel_wrong_workload 00:05:13.432 ************************************ 00:05:13.432 19:05:50 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w foobar 00:05:13.432 19:05:50 -- common/autotest_common.sh@638 -- # local es=0 00:05:13.691 19:05:50 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:13.691 19:05:50 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:13.691 19:05:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:13.691 19:05:50 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:13.691 19:05:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:13.691 19:05:50 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:13.691 19:05:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:13.691 19:05:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.691 19:05:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.691 19:05:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.691 19:05:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.691 19:05:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.691 19:05:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:13.691 19:05:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.691 19:05:50 -- accel/accel.sh@42 -- # jq -r . 00:05:13.691 Unsupported workload type: foobar 00:05:13.691 [2024-02-14 19:05:50.878707] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:13.691 accel_perf options: 00:05:13.691 [-h help message] 00:05:13.691 [-q queue depth per core] 00:05:13.691 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:13.691 [-T number of threads per core 00:05:13.691 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:13.691 [-t time in seconds] 00:05:13.691 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:13.691 [ dif_verify, , dif_generate, dif_generate_copy 00:05:13.691 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:13.691 [-l for compress/decompress workloads, name of uncompressed input file 00:05:13.691 [-S for crc32c workload, use this seed value (default 0) 00:05:13.691 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:13.691 [-f for fill workload, use this BYTE value (default 255) 00:05:13.691 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:13.691 [-y verify result if this switch is on] 00:05:13.691 [-a tasks to allocate per core (default: same value as -q)] 00:05:13.691 Can be used to spread operations across a wider range of memory. 00:05:13.691 19:05:50 -- common/autotest_common.sh@641 -- # es=1 00:05:13.691 ************************************ 00:05:13.691 END TEST accel_wrong_workload 00:05:13.691 ************************************ 00:05:13.691 19:05:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:13.691 19:05:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:13.691 19:05:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:13.691 00:05:13.691 real 0m0.038s 00:05:13.691 user 0m0.018s 00:05:13.691 sys 0m0.018s 00:05:13.691 19:05:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.691 19:05:50 -- common/autotest_common.sh@10 -- # set +x 00:05:13.692 19:05:50 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:13.692 19:05:50 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:05:13.692 19:05:50 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:13.692 19:05:50 -- common/autotest_common.sh@10 -- # set +x 00:05:13.692 ************************************ 00:05:13.692 START TEST accel_negative_buffers 00:05:13.692 ************************************ 00:05:13.692 19:05:50 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:13.692 19:05:50 -- common/autotest_common.sh@638 -- # local es=0 00:05:13.692 19:05:50 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:13.692 19:05:50 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:13.692 19:05:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:13.692 19:05:50 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:13.692 19:05:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:13.692 19:05:50 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:13.692 19:05:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:13.692 19:05:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.692 19:05:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.692 19:05:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.692 19:05:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.692 19:05:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.692 19:05:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:13.692 19:05:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.692 19:05:50 -- accel/accel.sh@42 -- # jq -r . 00:05:13.692 -x option must be non-negative. 
00:05:13.692 [2024-02-14 19:05:50.967821] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:13.692 accel_perf options: 00:05:13.692 [-h help message] 00:05:13.692 [-q queue depth per core] 00:05:13.692 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:13.692 [-T number of threads per core 00:05:13.692 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:13.692 [-t time in seconds] 00:05:13.692 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:13.692 [ dif_verify, , dif_generate, dif_generate_copy 00:05:13.692 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:13.692 [-l for compress/decompress workloads, name of uncompressed input file 00:05:13.692 [-S for crc32c workload, use this seed value (default 0) 00:05:13.692 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:13.692 [-f for fill workload, use this BYTE value (default 255) 00:05:13.692 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:13.692 [-y verify result if this switch is on] 00:05:13.692 [-a tasks to allocate per core (default: same value as -q)] 00:05:13.692 Can be used to spread operations across a wider range of memory. 00:05:13.692 19:05:50 -- common/autotest_common.sh@641 -- # es=1 00:05:13.692 19:05:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:13.692 19:05:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:13.692 19:05:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:13.692 00:05:13.692 real 0m0.038s 00:05:13.692 user 0m0.022s 00:05:13.692 sys 0m0.013s 00:05:13.692 19:05:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.692 19:05:50 -- common/autotest_common.sh@10 -- # set +x 00:05:13.692 ************************************ 00:05:13.692 END TEST accel_negative_buffers 00:05:13.692 ************************************ 00:05:13.692 19:05:51 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:13.692 19:05:51 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:13.692 19:05:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:13.692 19:05:51 -- common/autotest_common.sh@10 -- # set +x 00:05:13.692 ************************************ 00:05:13.692 START TEST accel_crc32c 00:05:13.692 ************************************ 00:05:13.692 19:05:51 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:13.692 19:05:51 -- accel/accel.sh@16 -- # local accel_opc 00:05:13.692 19:05:51 -- accel/accel.sh@17 -- # local accel_module 00:05:13.692 19:05:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:13.692 19:05:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:13.692 19:05:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.692 19:05:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.692 19:05:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.692 19:05:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.692 19:05:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.692 19:05:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:13.692 19:05:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.692 19:05:51 -- accel/accel.sh@42 -- # jq -r . 
00:05:13.692 [2024-02-14 19:05:51.067104] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:13.692 [2024-02-14 19:05:51.067237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58626 ] 00:05:13.950 [2024-02-14 19:05:51.208136] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.208 [2024-02-14 19:05:51.371439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.208 [2024-02-14 19:05:51.371594] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:15.143 [2024-02-14 19:05:52.462455] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:15.711 19:05:53 -- accel/accel.sh@18 -- # out=' 00:05:15.711 SPDK Configuration: 00:05:15.711 Core mask: 0x1 00:05:15.711 00:05:15.711 Accel Perf Configuration: 00:05:15.711 Workload Type: crc32c 00:05:15.711 CRC-32C seed: 32 00:05:15.711 Transfer size: 4096 bytes 00:05:15.711 Vector count 1 00:05:15.711 Module: software 00:05:15.711 Queue depth: 32 00:05:15.711 Allocate depth: 32 00:05:15.711 # threads/core: 1 00:05:15.711 Run time: 1 seconds 00:05:15.711 Verify: Yes 00:05:15.711 00:05:15.711 Running for 1 seconds... 00:05:15.711 00:05:15.711 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:15.711 ------------------------------------------------------------------------------------ 00:05:15.711 0,0 469760/s 1835 MiB/s 0 0 00:05:15.711 ==================================================================================== 00:05:15.711 Total 469760/s 1835 MiB/s 0 0' 00:05:15.711 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:15.711 19:05:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:15.711 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:15.711 19:05:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:15.711 19:05:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.711 19:05:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.711 19:05:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.711 19:05:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.711 19:05:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.711 19:05:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.711 19:05:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.711 19:05:53 -- accel/accel.sh@42 -- # jq -r . 00:05:15.711 [2024-02-14 19:05:53.040513] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:15.711 [2024-02-14 19:05:53.040643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58651 ] 00:05:15.970 [2024-02-14 19:05:53.176763] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.970 [2024-02-14 19:05:53.344503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.970 [2024-02-14 19:05:53.344620] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val= 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val= 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val=0x1 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val= 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val= 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val=crc32c 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val=32 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val= 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val=software 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@23 -- # accel_module=software 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val=32 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val=32 00:05:16.229 
19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val=1 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val=Yes 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val= 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.229 19:05:53 -- accel/accel.sh@21 -- # val= 00:05:16.229 19:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.229 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:17.198 [2024-02-14 19:05:54.433758] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:17.457 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.457 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.457 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.457 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.457 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.457 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.457 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.457 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.457 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.457 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.457 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.457 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.457 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.457 19:05:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:17.457 19:05:54 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:17.457 19:05:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.457 00:05:17.457 real 0m3.785s 00:05:17.457 user 0m3.210s 00:05:17.457 sys 0m0.360s 00:05:17.457 19:05:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.457 ************************************ 00:05:17.457 END TEST accel_crc32c 00:05:17.457 ************************************ 00:05:17.457 19:05:54 -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.457 19:05:54 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:17.457 19:05:54 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:17.457 19:05:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:17.457 19:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:17.715 ************************************ 00:05:17.715 START TEST accel_crc32c_C2 00:05:17.715 ************************************ 00:05:17.715 19:05:54 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:17.715 19:05:54 -- accel/accel.sh@16 -- # local accel_opc 00:05:17.715 19:05:54 -- accel/accel.sh@17 -- # local accel_module 00:05:17.715 19:05:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:17.715 19:05:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:17.715 19:05:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.715 19:05:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:17.715 19:05:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.716 19:05:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.716 19:05:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:17.716 19:05:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:17.716 19:05:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:17.716 19:05:54 -- accel/accel.sh@42 -- # jq -r . 00:05:17.716 [2024-02-14 19:05:54.894812] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:17.716 [2024-02-14 19:05:54.894919] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58685 ] 00:05:17.716 [2024-02-14 19:05:55.028404] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.974 [2024-02-14 19:05:55.192019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.974 [2024-02-14 19:05:55.192128] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:18.911 [2024-02-14 19:05:56.278693] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:19.478 19:05:56 -- accel/accel.sh@18 -- # out=' 00:05:19.478 SPDK Configuration: 00:05:19.478 Core mask: 0x1 00:05:19.478 00:05:19.478 Accel Perf Configuration: 00:05:19.478 Workload Type: crc32c 00:05:19.478 CRC-32C seed: 0 00:05:19.478 Transfer size: 4096 bytes 00:05:19.478 Vector count 2 00:05:19.478 Module: software 00:05:19.478 Queue depth: 32 00:05:19.478 Allocate depth: 32 00:05:19.478 # threads/core: 1 00:05:19.478 Run time: 1 seconds 00:05:19.478 Verify: Yes 00:05:19.478 00:05:19.478 Running for 1 seconds... 
00:05:19.478 00:05:19.478 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:19.478 ------------------------------------------------------------------------------------ 00:05:19.478 0,0 358656/s 2802 MiB/s 0 0 00:05:19.478 ==================================================================================== 00:05:19.478 Total 358656/s 1401 MiB/s 0 0' 00:05:19.478 19:05:56 -- accel/accel.sh@20 -- # IFS=: 00:05:19.478 19:05:56 -- accel/accel.sh@20 -- # read -r var val 00:05:19.478 19:05:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:19.478 19:05:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:19.478 19:05:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.478 19:05:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.478 19:05:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.478 19:05:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.478 19:05:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.478 19:05:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.478 19:05:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.478 19:05:56 -- accel/accel.sh@42 -- # jq -r . 00:05:19.478 [2024-02-14 19:05:56.684051] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:19.478 [2024-02-14 19:05:56.684236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58705 ] 00:05:19.478 [2024-02-14 19:05:56.831032] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.738 [2024-02-14 19:05:57.002144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.738 [2024-02-14 19:05:57.002255] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val= 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val= 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val=0x1 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val= 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val= 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val=crc32c 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 
-- accel/accel.sh@21 -- # val=0 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val= 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val=software 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@23 -- # accel_module=software 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val=32 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val=32 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val=1 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val=Yes 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val= 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.738 19:05:57 -- accel/accel.sh@21 -- # val= 00:05:19.738 19:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:19.738 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:21.116 [2024-02-14 19:05:58.104270] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:21.376 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.376 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.376 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.376 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.376 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.376 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.376 19:05:58 
-- accel/accel.sh@20 -- # read -r var val 00:05:21.376 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.376 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.376 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.376 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.376 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.376 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.376 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.376 19:05:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:21.376 19:05:58 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:21.376 19:05:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.376 00:05:21.376 real 0m3.829s 00:05:21.376 user 0m3.259s 00:05:21.376 sys 0m0.357s 00:05:21.376 ************************************ 00:05:21.376 END TEST accel_crc32c_C2 00:05:21.376 ************************************ 00:05:21.376 19:05:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.376 19:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:21.376 19:05:58 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:21.376 19:05:58 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:21.376 19:05:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:21.376 19:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:21.376 ************************************ 00:05:21.376 START TEST accel_copy 00:05:21.376 ************************************ 00:05:21.376 19:05:58 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy -y 00:05:21.376 19:05:58 -- accel/accel.sh@16 -- # local accel_opc 00:05:21.376 19:05:58 -- accel/accel.sh@17 -- # local accel_module 00:05:21.376 19:05:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:21.376 19:05:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:21.376 19:05:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.376 19:05:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.376 19:05:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.376 19:05:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.376 19:05:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.376 19:05:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.376 19:05:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.376 19:05:58 -- accel/accel.sh@42 -- # jq -r . 00:05:21.635 [2024-02-14 19:05:58.802082] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:21.635 [2024-02-14 19:05:58.802275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58745 ] 00:05:21.635 [2024-02-14 19:05:58.942223] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.894 [2024-02-14 19:05:59.187564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.894 [2024-02-14 19:05:59.187703] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:23.272 [2024-02-14 19:06:00.354708] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:23.531 19:06:00 -- accel/accel.sh@18 -- # out=' 00:05:23.531 SPDK Configuration: 00:05:23.531 Core mask: 0x1 00:05:23.531 00:05:23.531 Accel Perf Configuration: 00:05:23.531 Workload Type: copy 00:05:23.531 Transfer size: 4096 bytes 00:05:23.531 Vector count 1 00:05:23.531 Module: software 00:05:23.531 Queue depth: 32 00:05:23.531 Allocate depth: 32 00:05:23.531 # threads/core: 1 00:05:23.531 Run time: 1 seconds 00:05:23.531 Verify: Yes 00:05:23.531 00:05:23.531 Running for 1 seconds... 00:05:23.531 00:05:23.531 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:23.531 ------------------------------------------------------------------------------------ 00:05:23.531 0,0 324192/s 1266 MiB/s 0 0 00:05:23.531 ==================================================================================== 00:05:23.531 Total 324192/s 1266 MiB/s 0 0' 00:05:23.531 19:06:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:23.531 19:06:00 -- accel/accel.sh@20 -- # IFS=: 00:05:23.531 19:06:00 -- accel/accel.sh@20 -- # read -r var val 00:05:23.531 19:06:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:23.531 19:06:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.531 19:06:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.531 19:06:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.531 19:06:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.531 19:06:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.531 19:06:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.531 19:06:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.531 19:06:00 -- accel/accel.sh@42 -- # jq -r . 00:05:23.531 [2024-02-14 19:06:00.730405] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:23.531 [2024-02-14 19:06:00.730511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58770 ] 00:05:23.531 [2024-02-14 19:06:00.864459] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.791 [2024-02-14 19:06:01.028455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.791 [2024-02-14 19:06:01.028608] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val= 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val= 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val=0x1 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val= 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val= 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val=copy 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val= 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val=software 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@23 -- # accel_module=software 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val=32 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val=32 00:05:23.791 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.791 19:06:01 -- accel/accel.sh@21 -- # val=1 00:05:23.791 
19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.791 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.792 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.792 19:06:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:23.792 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.792 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.792 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.792 19:06:01 -- accel/accel.sh@21 -- # val=Yes 00:05:23.792 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.792 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.792 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.792 19:06:01 -- accel/accel.sh@21 -- # val= 00:05:23.792 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.792 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.792 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:23.792 19:06:01 -- accel/accel.sh@21 -- # val= 00:05:23.792 19:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.792 19:06:01 -- accel/accel.sh@20 -- # IFS=: 00:05:23.792 19:06:01 -- accel/accel.sh@20 -- # read -r var val 00:05:25.169 [2024-02-14 19:06:02.169434] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:25.429 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.429 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.429 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.429 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.429 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.429 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.429 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.429 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.429 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.429 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.429 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.429 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.429 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.429 ************************************ 00:05:25.429 END TEST accel_copy 00:05:25.429 ************************************ 00:05:25.429 19:06:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:25.429 19:06:02 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:25.429 19:06:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.429 00:05:25.429 real 0m3.822s 00:05:25.429 user 0m3.083s 00:05:25.429 sys 0m0.520s 00:05:25.429 19:06:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.429 19:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.429 19:06:02 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:25.429 19:06:02 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:05:25.429 19:06:02 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:05:25.429 19:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.429 ************************************ 00:05:25.429 START TEST accel_fill 00:05:25.429 ************************************ 00:05:25.429 19:06:02 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:25.429 19:06:02 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.429 19:06:02 -- accel/accel.sh@17 -- # local accel_module 00:05:25.429 19:06:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:25.429 19:06:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:25.429 19:06:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.429 19:06:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.429 19:06:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.429 19:06:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.429 19:06:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.429 19:06:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.429 19:06:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.429 19:06:02 -- accel/accel.sh@42 -- # jq -r . 00:05:25.429 [2024-02-14 19:06:02.670559] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:25.429 [2024-02-14 19:06:02.670677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58810 ] 00:05:25.429 [2024-02-14 19:06:02.809474] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.688 [2024-02-14 19:06:02.993977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.688 [2024-02-14 19:06:02.994097] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:27.063 [2024-02-14 19:06:04.098322] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:27.063 19:06:04 -- accel/accel.sh@18 -- # out=' 00:05:27.063 SPDK Configuration: 00:05:27.063 Core mask: 0x1 00:05:27.063 00:05:27.063 Accel Perf Configuration: 00:05:27.063 Workload Type: fill 00:05:27.063 Fill pattern: 0x80 00:05:27.063 Transfer size: 4096 bytes 00:05:27.063 Vector count 1 00:05:27.063 Module: software 00:05:27.063 Queue depth: 64 00:05:27.063 Allocate depth: 64 00:05:27.063 # threads/core: 1 00:05:27.063 Run time: 1 seconds 00:05:27.063 Verify: Yes 00:05:27.063 00:05:27.063 Running for 1 seconds... 
00:05:27.063 00:05:27.063 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:27.063 ------------------------------------------------------------------------------------ 00:05:27.063 0,0 511680/s 1998 MiB/s 0 0 00:05:27.063 ==================================================================================== 00:05:27.063 Total 511680/s 1998 MiB/s 0 0' 00:05:27.063 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.063 19:06:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.063 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.063 19:06:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.063 19:06:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.063 19:06:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.063 19:06:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.063 19:06:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.063 19:06:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.063 19:06:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.063 19:06:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.063 19:06:04 -- accel/accel.sh@42 -- # jq -r . 00:05:27.063 [2024-02-14 19:06:04.385397] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:27.063 [2024-02-14 19:06:04.385522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58835 ] 00:05:27.323 [2024-02-14 19:06:04.518222] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.323 [2024-02-14 19:06:04.657249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.323 [2024-02-14 19:06:04.657341] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:27.323 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:27.323 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.323 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:27.323 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.323 19:06:04 -- accel/accel.sh@21 -- # val=0x1 00:05:27.323 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.323 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:27.323 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.323 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:27.323 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.323 19:06:04 -- accel/accel.sh@21 -- # val=fill 00:05:27.323 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.323 19:06:04 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # read -r var val 
00:05:27.323 19:06:04 -- accel/accel.sh@21 -- # val=0x80 00:05:27.323 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.323 19:06:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:27.323 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.323 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.323 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:27.582 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.582 19:06:04 -- accel/accel.sh@21 -- # val=software 00:05:27.582 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.582 19:06:04 -- accel/accel.sh@23 -- # accel_module=software 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.582 19:06:04 -- accel/accel.sh@21 -- # val=64 00:05:27.582 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.582 19:06:04 -- accel/accel.sh@21 -- # val=64 00:05:27.582 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.582 19:06:04 -- accel/accel.sh@21 -- # val=1 00:05:27.582 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.582 19:06:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:27.582 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.582 19:06:04 -- accel/accel.sh@21 -- # val=Yes 00:05:27.582 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.582 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:27.582 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:27.582 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:27.582 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:27.582 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:28.518 [2024-02-14 19:06:05.731784] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:28.778 19:06:05 -- accel/accel.sh@21 -- # val= 00:05:28.778 19:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # IFS=: 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # read -r var val 00:05:28.778 19:06:05 -- accel/accel.sh@21 -- # val= 00:05:28.778 19:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # IFS=: 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # read -r var val 00:05:28.778 19:06:05 -- accel/accel.sh@21 -- # val= 00:05:28.778 19:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # 
IFS=: 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # read -r var val 00:05:28.778 19:06:05 -- accel/accel.sh@21 -- # val= 00:05:28.778 19:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # IFS=: 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # read -r var val 00:05:28.778 19:06:05 -- accel/accel.sh@21 -- # val= 00:05:28.778 19:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # IFS=: 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # read -r var val 00:05:28.778 19:06:05 -- accel/accel.sh@21 -- # val= 00:05:28.778 19:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # IFS=: 00:05:28.778 19:06:05 -- accel/accel.sh@20 -- # read -r var val 00:05:28.778 19:06:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:28.778 19:06:05 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:28.778 19:06:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.778 00:05:28.778 real 0m3.345s 00:05:28.778 user 0m2.793s 00:05:28.778 sys 0m0.341s 00:05:28.778 19:06:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.778 ************************************ 00:05:28.778 END TEST accel_fill 00:05:28.778 ************************************ 00:05:28.778 19:06:05 -- common/autotest_common.sh@10 -- # set +x 00:05:28.778 19:06:06 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:28.778 19:06:06 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:28.778 19:06:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:28.778 19:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:28.778 ************************************ 00:05:28.778 START TEST accel_copy_crc32c 00:05:28.778 ************************************ 00:05:28.778 19:06:06 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y 00:05:28.778 19:06:06 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.778 19:06:06 -- accel/accel.sh@17 -- # local accel_module 00:05:28.778 19:06:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:28.778 19:06:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:28.778 19:06:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.778 19:06:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.778 19:06:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.778 19:06:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.779 19:06:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.779 19:06:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.779 19:06:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.779 19:06:06 -- accel/accel.sh@42 -- # jq -r . 00:05:28.779 [2024-02-14 19:06:06.061290] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:28.779 [2024-02-14 19:06:06.061379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58864 ] 00:05:28.779 [2024-02-14 19:06:06.192607] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.040 [2024-02-14 19:06:06.329358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.040 [2024-02-14 19:06:06.329464] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:30.417 [2024-02-14 19:06:07.408314] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:30.417 19:06:07 -- accel/accel.sh@18 -- # out=' 00:05:30.417 SPDK Configuration: 00:05:30.417 Core mask: 0x1 00:05:30.417 00:05:30.417 Accel Perf Configuration: 00:05:30.417 Workload Type: copy_crc32c 00:05:30.417 CRC-32C seed: 0 00:05:30.417 Vector size: 4096 bytes 00:05:30.417 Transfer size: 4096 bytes 00:05:30.417 Vector count 1 00:05:30.417 Module: software 00:05:30.417 Queue depth: 32 00:05:30.417 Allocate depth: 32 00:05:30.417 # threads/core: 1 00:05:30.417 Run time: 1 seconds 00:05:30.417 Verify: Yes 00:05:30.417 00:05:30.417 Running for 1 seconds... 00:05:30.417 00:05:30.417 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:30.417 ------------------------------------------------------------------------------------ 00:05:30.417 0,0 265248/s 1036 MiB/s 0 0 00:05:30.417 ==================================================================================== 00:05:30.417 Total 265248/s 1036 MiB/s 0 0' 00:05:30.417 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:30.417 19:06:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:30.417 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:30.417 19:06:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:30.417 19:06:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.417 19:06:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.417 19:06:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.417 19:06:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.417 19:06:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.417 19:06:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.417 19:06:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.417 19:06:07 -- accel/accel.sh@42 -- # jq -r . 00:05:30.417 [2024-02-14 19:06:07.692038] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:30.417 [2024-02-14 19:06:07.692145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58889 ] 00:05:30.417 [2024-02-14 19:06:07.831990] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.676 [2024-02-14 19:06:07.969990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.676 [2024-02-14 19:06:07.970082] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val=0x1 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val=0 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val=software 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@23 -- # accel_module=software 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # 
val=32 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val=32 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val=1 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val=Yes 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:30.676 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:30.676 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:30.676 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:32.052 [2024-02-14 19:06:09.046074] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:32.052 19:06:09 -- accel/accel.sh@21 -- # val= 00:05:32.052 19:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # IFS=: 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # read -r var val 00:05:32.052 19:06:09 -- accel/accel.sh@21 -- # val= 00:05:32.052 19:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # IFS=: 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # read -r var val 00:05:32.052 19:06:09 -- accel/accel.sh@21 -- # val= 00:05:32.052 19:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # IFS=: 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # read -r var val 00:05:32.052 19:06:09 -- accel/accel.sh@21 -- # val= 00:05:32.052 19:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # IFS=: 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # read -r var val 00:05:32.052 19:06:09 -- accel/accel.sh@21 -- # val= 00:05:32.052 19:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # IFS=: 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # read -r var val 00:05:32.052 19:06:09 -- accel/accel.sh@21 -- # val= 00:05:32.052 19:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # IFS=: 00:05:32.052 19:06:09 -- accel/accel.sh@20 -- # read -r var val 00:05:32.052 19:06:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:32.052 19:06:09 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:32.052 19:06:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.052 00:05:32.052 real 0m3.270s 00:05:32.052 user 0m2.775s 00:05:32.052 sys 
0m0.288s 00:05:32.052 19:06:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.052 19:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:32.052 ************************************ 00:05:32.052 END TEST accel_copy_crc32c 00:05:32.052 ************************************ 00:05:32.052 19:06:09 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:32.052 19:06:09 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:32.052 19:06:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:32.052 19:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:32.052 ************************************ 00:05:32.052 START TEST accel_copy_crc32c_C2 00:05:32.052 ************************************ 00:05:32.052 19:06:09 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:32.052 19:06:09 -- accel/accel.sh@16 -- # local accel_opc 00:05:32.052 19:06:09 -- accel/accel.sh@17 -- # local accel_module 00:05:32.052 19:06:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:32.052 19:06:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:32.052 19:06:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.052 19:06:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.052 19:06:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.052 19:06:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.052 19:06:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.052 19:06:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.052 19:06:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.052 19:06:09 -- accel/accel.sh@42 -- # jq -r . 00:05:32.052 [2024-02-14 19:06:09.395629] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:32.052 [2024-02-14 19:06:09.395714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58924 ] 00:05:32.310 [2024-02-14 19:06:09.531066] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.310 [2024-02-14 19:06:09.646814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.310 [2024-02-14 19:06:09.646903] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:33.685 [2024-02-14 19:06:10.721831] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:33.685 19:06:10 -- accel/accel.sh@18 -- # out=' 00:05:33.685 SPDK Configuration: 00:05:33.685 Core mask: 0x1 00:05:33.685 00:05:33.685 Accel Perf Configuration: 00:05:33.685 Workload Type: copy_crc32c 00:05:33.685 CRC-32C seed: 0 00:05:33.685 Vector size: 4096 bytes 00:05:33.685 Transfer size: 8192 bytes 00:05:33.685 Vector count 2 00:05:33.685 Module: software 00:05:33.685 Queue depth: 32 00:05:33.685 Allocate depth: 32 00:05:33.685 # threads/core: 1 00:05:33.685 Run time: 1 seconds 00:05:33.685 Verify: Yes 00:05:33.685 00:05:33.685 Running for 1 seconds... 
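The -C 2 variant started above splits each 8,192-byte transfer into two 4,096-byte source vectors (hence "Vector count 2" in the dump). Assuming the software path simply walks the vectors in order, copying each and feeding the running CRC of one vector in as the seed of the next, a sketch looks like this (the iovec layout and names are illustrative, not the accel API; crc32c_update() is the helper sketched earlier):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/uio.h>            /* struct iovec */

    /* Prototype for the bitwise helper sketched above. */
    uint32_t crc32c_update(uint32_t crc, const void *buf, size_t len);

    /* Copy a scattered source into one destination buffer and CRC the
     * whole payload; chaining the CRC per vector gives the same result
     * as one CRC over the concatenated data. */
    static uint32_t copy_crc32c_vectored(void *dst, const struct iovec *iov,
                                         int iovcnt, uint32_t seed)
    {
        uint8_t *out = dst;
        uint32_t crc = seed;

        for (int i = 0; i < iovcnt; i++) {
            memcpy(out, iov[i].iov_base, iov[i].iov_len);
            crc = crc32c_update(crc, iov[i].iov_base, iov[i].iov_len);
            out += iov[i].iov_len;
        }
        return crc;
    }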
00:05:33.685 00:05:33.685 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:33.685 ------------------------------------------------------------------------------------ 00:05:33.685 0,0 200480/s 1566 MiB/s 0 0 00:05:33.685 ==================================================================================== 00:05:33.685 Total 200480/s 783 MiB/s 0 0' 00:05:33.685 19:06:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:33.685 19:06:10 -- accel/accel.sh@20 -- # IFS=: 00:05:33.685 19:06:10 -- accel/accel.sh@20 -- # read -r var val 00:05:33.685 19:06:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:33.685 19:06:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.685 19:06:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.685 19:06:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.685 19:06:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.685 19:06:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.685 19:06:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.685 19:06:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.685 19:06:11 -- accel/accel.sh@42 -- # jq -r . 00:05:33.685 [2024-02-14 19:06:11.024721] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:33.685 [2024-02-14 19:06:11.024830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58943 ] 00:05:33.943 [2024-02-14 19:06:11.159278] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.943 [2024-02-14 19:06:11.312164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.943 [2024-02-14 19:06:11.312266] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val=0x1 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 
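One detail worth calling out in the -C 2 results above: 200,480 transfers/s × 8,192 bytes is roughly 1566 MiB/s, which is what the per-core row reports, while 200,480 × 4,096 bytes is roughly 783 MiB/s, which is what the Total row reports. The two rows appear to count a full transfer versus a single 4,096-byte vector rather than disagreeing about throughput.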
00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val=0 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val=software 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@23 -- # accel_module=software 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val=32 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val=32 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val=1 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val=Yes 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.202 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:34.202 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:34.202 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:35.136 [2024-02-14 19:06:12.392810] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:35.396 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.396 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.396 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.396 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.396 19:06:12 -- 
accel/accel.sh@20 -- # IFS=: 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.396 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.396 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.396 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.396 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.396 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.396 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.396 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.396 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.396 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.396 19:06:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:35.396 19:06:12 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:35.396 19:06:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.396 00:05:35.396 real 0m3.316s 00:05:35.396 user 0m2.811s 00:05:35.396 sys 0m0.295s 00:05:35.396 19:06:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.396 19:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:35.396 ************************************ 00:05:35.396 END TEST accel_copy_crc32c_C2 00:05:35.396 ************************************ 00:05:35.396 19:06:12 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:35.396 19:06:12 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:35.396 19:06:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:35.396 19:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:35.396 ************************************ 00:05:35.396 START TEST accel_dualcast 00:05:35.396 ************************************ 00:05:35.396 19:06:12 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dualcast -y 00:05:35.396 19:06:12 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.396 19:06:12 -- accel/accel.sh@17 -- # local accel_module 00:05:35.396 19:06:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:35.396 19:06:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:35.396 19:06:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.396 19:06:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.396 19:06:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.396 19:06:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.396 19:06:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.396 19:06:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.396 19:06:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.396 19:06:12 -- accel/accel.sh@42 -- # jq -r . 00:05:35.396 [2024-02-14 19:06:12.761880] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
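dualcast, started above, writes a single 4,096-byte source buffer to two destination buffers in one operation; the software path is simply two copies, while a hardware engine may do it in a single descriptor. A minimal sketch with an illustrative signature (not the accel API):

    #include <stddef.h>
    #include <string.h>

    /* Software-path view of dualcast: one source fanned out to two
     * destinations of the same length. */
    static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }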
00:05:35.396 [2024-02-14 19:06:12.762034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58983 ] 00:05:35.655 [2024-02-14 19:06:12.905649] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.655 [2024-02-14 19:06:13.036956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.655 [2024-02-14 19:06:13.037049] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:37.031 [2024-02-14 19:06:14.121625] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:37.031 19:06:14 -- accel/accel.sh@18 -- # out=' 00:05:37.031 SPDK Configuration: 00:05:37.031 Core mask: 0x1 00:05:37.031 00:05:37.031 Accel Perf Configuration: 00:05:37.031 Workload Type: dualcast 00:05:37.031 Transfer size: 4096 bytes 00:05:37.031 Vector count 1 00:05:37.031 Module: software 00:05:37.031 Queue depth: 32 00:05:37.031 Allocate depth: 32 00:05:37.031 # threads/core: 1 00:05:37.031 Run time: 1 seconds 00:05:37.031 Verify: Yes 00:05:37.031 00:05:37.031 Running for 1 seconds... 00:05:37.031 00:05:37.031 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:37.031 ------------------------------------------------------------------------------------ 00:05:37.031 0,0 397696/s 1553 MiB/s 0 0 00:05:37.031 ==================================================================================== 00:05:37.031 Total 397696/s 1553 MiB/s 0 0' 00:05:37.031 19:06:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:37.031 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.031 19:06:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:37.031 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.031 19:06:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.031 19:06:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.031 19:06:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.031 19:06:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.031 19:06:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.031 19:06:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:37.031 19:06:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.031 19:06:14 -- accel/accel.sh@42 -- # jq -r . 00:05:37.031 [2024-02-14 19:06:14.418353] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:37.031 [2024-02-14 19:06:14.418454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59003 ] 00:05:37.290 [2024-02-14 19:06:14.552905] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.290 [2024-02-14 19:06:14.677613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.290 [2024-02-14 19:06:14.677701] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val= 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val= 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val=0x1 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val= 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val= 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val=dualcast 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val= 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val=software 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@23 -- # accel_module=software 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val=32 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val=32 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val=1 00:05:37.550 
19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val=Yes 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val= 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.550 19:06:14 -- accel/accel.sh@21 -- # val= 00:05:37.550 19:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.550 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:38.485 [2024-02-14 19:06:15.758721] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:38.744 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:38.744 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:38.744 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:38.744 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:38.744 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:38.744 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:38.744 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:38.744 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:38.744 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:38.744 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:38.744 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:38.744 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:38.744 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:38.744 19:06:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:38.744 19:06:16 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:38.744 19:06:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.744 00:05:38.744 real 0m3.308s 00:05:38.744 user 0m2.785s 00:05:38.744 sys 0m0.315s 00:05:38.744 19:06:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.744 19:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.744 ************************************ 00:05:38.744 END TEST accel_dualcast 00:05:38.744 ************************************ 00:05:38.744 19:06:16 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:38.744 19:06:16 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:38.744 19:06:16 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:05:38.744 19:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.744 ************************************ 00:05:38.744 START TEST accel_compare 00:05:38.744 ************************************ 00:05:38.744 19:06:16 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compare -y 00:05:38.744 19:06:16 -- accel/accel.sh@16 -- # local accel_opc 00:05:38.744 19:06:16 -- accel/accel.sh@17 -- # local accel_module 00:05:38.744 19:06:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:38.744 19:06:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:38.744 19:06:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.744 19:06:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.744 19:06:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.744 19:06:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.744 19:06:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.744 19:06:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.744 19:06:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.744 19:06:16 -- accel/accel.sh@42 -- # jq -r . 00:05:38.744 [2024-02-14 19:06:16.116766] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:38.744 [2024-02-14 19:06:16.116839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59037 ] 00:05:39.003 [2024-02-14 19:06:16.245719] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.003 [2024-02-14 19:06:16.350246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.003 [2024-02-14 19:06:16.350330] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:40.385 [2024-02-14 19:06:17.428961] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:40.385 19:06:17 -- accel/accel.sh@18 -- # out=' 00:05:40.385 SPDK Configuration: 00:05:40.385 Core mask: 0x1 00:05:40.385 00:05:40.385 Accel Perf Configuration: 00:05:40.385 Workload Type: compare 00:05:40.385 Transfer size: 4096 bytes 00:05:40.385 Vector count 1 00:05:40.385 Module: software 00:05:40.385 Queue depth: 32 00:05:40.385 Allocate depth: 32 00:05:40.385 # threads/core: 1 00:05:40.385 Run time: 1 seconds 00:05:40.385 Verify: Yes 00:05:40.385 00:05:40.385 Running for 1 seconds... 
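compare, started above, checks two equal-length buffers for equality, and any mismatch would land in the Miscompares column (with -y the buffers are presumably prepared to match, hence the zeros). The software path reduces to a memcmp; a minimal sketch with illustrative names:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Software-path view of the compare op: report whether two buffers
     * hold identical bytes over the given length. */
    static bool buffers_match(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len) == 0;
    }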
00:05:40.385 00:05:40.385 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:40.385 ------------------------------------------------------------------------------------ 00:05:40.385 0,0 500256/s 1954 MiB/s 0 0 00:05:40.385 ==================================================================================== 00:05:40.385 Total 500256/s 1954 MiB/s 0 0' 00:05:40.385 19:06:17 -- accel/accel.sh@20 -- # IFS=: 00:05:40.385 19:06:17 -- accel/accel.sh@20 -- # read -r var val 00:05:40.385 19:06:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:40.385 19:06:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:40.385 19:06:17 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.385 19:06:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.385 19:06:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.385 19:06:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.385 19:06:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.385 19:06:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.385 19:06:17 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.385 19:06:17 -- accel/accel.sh@42 -- # jq -r . 00:05:40.385 [2024-02-14 19:06:17.754843] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:40.385 [2024-02-14 19:06:17.754946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59059 ] 00:05:40.644 [2024-02-14 19:06:17.892414] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.644 [2024-02-14 19:06:18.055622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.644 [2024-02-14 19:06:18.055732] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val= 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val= 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val=0x1 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val= 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val= 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val=compare 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val= 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val=software 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@23 -- # accel_module=software 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val=32 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val=32 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val=1 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val=Yes 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val= 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.903 19:06:18 -- accel/accel.sh@21 -- # val= 00:05:40.903 19:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:40.903 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:41.840 [2024-02-14 19:06:19.145780] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:42.099 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.099 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.099 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.099 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.099 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.099 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.099 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.099 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.099 19:06:19 -- 
accel/accel.sh@20 -- # read -r var val 00:05:42.099 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.099 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.099 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.099 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.099 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.099 19:06:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:42.099 19:06:19 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:42.099 19:06:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.099 00:05:42.099 real 0m3.364s 00:05:42.099 user 0m2.834s 00:05:42.099 sys 0m0.322s 00:05:42.099 19:06:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.099 ************************************ 00:05:42.099 END TEST accel_compare 00:05:42.099 ************************************ 00:05:42.099 19:06:19 -- common/autotest_common.sh@10 -- # set +x 00:05:42.099 19:06:19 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:42.099 19:06:19 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:42.099 19:06:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:42.099 19:06:19 -- common/autotest_common.sh@10 -- # set +x 00:05:42.358 ************************************ 00:05:42.358 START TEST accel_xor 00:05:42.358 ************************************ 00:05:42.358 19:06:19 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y 00:05:42.358 19:06:19 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.358 19:06:19 -- accel/accel.sh@17 -- # local accel_module 00:05:42.358 19:06:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:42.358 19:06:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:42.358 19:06:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.358 19:06:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.358 19:06:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.358 19:06:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.358 19:06:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.358 19:06:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.358 19:06:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.358 19:06:19 -- accel/accel.sh@42 -- # jq -r . 00:05:42.358 [2024-02-14 19:06:19.541001] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
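xor, started above, XORs the source buffers byte-for-byte into a destination buffer: two sources in this run, three in the -x 3 run further down. A generic software sketch for N sources (names are illustrative, not the accel API):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Software-path view of the xor workload: dst = src[0] ^ src[1] ^ ...
     * All buffers are assumed to be 'len' bytes long; nsrc must be >= 1. */
    static void xor_buffers(void *dst, const void **sources, int nsrc, size_t len)
    {
        uint8_t *out = dst;

        memcpy(out, sources[0], len);
        for (int s = 1; s < nsrc; s++) {
            const uint8_t *in = sources[s];

            for (size_t i = 0; i < len; i++)
                out[i] ^= in[i];
        }
    }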
00:05:42.358 [2024-02-14 19:06:19.541110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59099 ] 00:05:42.358 [2024-02-14 19:06:19.675924] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.616 [2024-02-14 19:06:19.837532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.616 [2024-02-14 19:06:19.837638] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:43.553 [2024-02-14 19:06:20.925958] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:43.812 19:06:21 -- accel/accel.sh@18 -- # out=' 00:05:43.812 SPDK Configuration: 00:05:43.812 Core mask: 0x1 00:05:43.812 00:05:43.812 Accel Perf Configuration: 00:05:43.812 Workload Type: xor 00:05:43.812 Source buffers: 2 00:05:43.812 Transfer size: 4096 bytes 00:05:43.812 Vector count 1 00:05:43.812 Module: software 00:05:43.812 Queue depth: 32 00:05:43.812 Allocate depth: 32 00:05:43.812 # threads/core: 1 00:05:43.812 Run time: 1 seconds 00:05:43.812 Verify: Yes 00:05:43.812 00:05:43.812 Running for 1 seconds... 00:05:43.812 00:05:43.812 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:43.812 ------------------------------------------------------------------------------------ 00:05:43.812 0,0 224960/s 878 MiB/s 0 0 00:05:43.812 ==================================================================================== 00:05:43.812 Total 224960/s 878 MiB/s 0 0' 00:05:43.812 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:43.812 19:06:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:43.812 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:43.812 19:06:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:43.812 19:06:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.812 19:06:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.812 19:06:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.812 19:06:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.812 19:06:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.812 19:06:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.812 19:06:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.812 19:06:21 -- accel/accel.sh@42 -- # jq -r . 00:05:44.071 [2024-02-14 19:06:21.242078] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:44.071 [2024-02-14 19:06:21.242197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59118 ] 00:05:44.071 [2024-02-14 19:06:21.379722] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.331 [2024-02-14 19:06:21.535957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.331 [2024-02-14 19:06:21.536056] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val= 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val= 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val=0x1 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val= 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val= 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val=xor 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val=2 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val= 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val=software 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@23 -- # accel_module=software 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val=32 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val=32 00:05:44.331 19:06:21 
-- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val=1 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val=Yes 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val= 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:44.331 19:06:21 -- accel/accel.sh@21 -- # val= 00:05:44.331 19:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:44.331 19:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:45.268 [2024-02-14 19:06:22.621574] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:45.527 19:06:22 -- accel/accel.sh@21 -- # val= 00:05:45.527 19:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:45.527 19:06:22 -- accel/accel.sh@21 -- # val= 00:05:45.527 19:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:45.527 19:06:22 -- accel/accel.sh@21 -- # val= 00:05:45.527 19:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:45.527 19:06:22 -- accel/accel.sh@21 -- # val= 00:05:45.527 19:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:45.527 19:06:22 -- accel/accel.sh@21 -- # val= 00:05:45.527 19:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:45.527 19:06:22 -- accel/accel.sh@21 -- # val= 00:05:45.527 19:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:45.527 19:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:45.527 19:06:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:45.527 19:06:22 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:45.527 19:06:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.527 00:05:45.527 real 0m3.415s 00:05:45.527 user 0m2.895s 00:05:45.527 sys 0m0.315s 00:05:45.527 19:06:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.527 ************************************ 00:05:45.527 END TEST accel_xor 00:05:45.527 ************************************ 00:05:45.527 19:06:22 -- common/autotest_common.sh@10 -- # set 
+x 00:05:45.786 19:06:22 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:45.786 19:06:22 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:45.786 19:06:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:45.786 19:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.786 ************************************ 00:05:45.786 START TEST accel_xor 00:05:45.786 ************************************ 00:05:45.786 19:06:22 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y -x 3 00:05:45.786 19:06:22 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.786 19:06:22 -- accel/accel.sh@17 -- # local accel_module 00:05:45.786 19:06:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:45.786 19:06:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:45.786 19:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.786 19:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.786 19:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.786 19:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.786 19:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.786 19:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.786 19:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.786 19:06:22 -- accel/accel.sh@42 -- # jq -r . 00:05:45.786 [2024-02-14 19:06:23.010118] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:45.786 [2024-02-14 19:06:23.010265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59153 ] 00:05:45.786 [2024-02-14 19:06:23.159679] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.059 [2024-02-14 19:06:23.312876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.059 [2024-02-14 19:06:23.312976] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:46.999 [2024-02-14 19:06:24.400838] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:47.566 19:06:24 -- accel/accel.sh@18 -- # out=' 00:05:47.566 SPDK Configuration: 00:05:47.566 Core mask: 0x1 00:05:47.566 00:05:47.566 Accel Perf Configuration: 00:05:47.566 Workload Type: xor 00:05:47.566 Source buffers: 3 00:05:47.566 Transfer size: 4096 bytes 00:05:47.566 Vector count 1 00:05:47.566 Module: software 00:05:47.566 Queue depth: 32 00:05:47.566 Allocate depth: 32 00:05:47.566 # threads/core: 1 00:05:47.566 Run time: 1 seconds 00:05:47.566 Verify: Yes 00:05:47.566 00:05:47.566 Running for 1 seconds... 
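The -x 3 run started above is the same operation with three source buffers ("Source buffers: 3" in the dump); with the xor_buffers() sketch shown earlier that is just nsrc = 3. A hypothetical caller fragment:

    /* Illustrative fragment only: fill a, b and c with test data first. */
    uint8_t a[4096], b[4096], c[4096], out[4096];
    const void *srcs[3] = { a, b, c };
    xor_buffers(out, srcs, 3, sizeof(out));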
00:05:47.566 00:05:47.566 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:47.566 ------------------------------------------------------------------------------------ 00:05:47.566 0,0 223040/s 871 MiB/s 0 0 00:05:47.566 ==================================================================================== 00:05:47.566 Total 223040/s 871 MiB/s 0 0' 00:05:47.566 19:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:47.566 19:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:47.566 19:06:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:47.566 19:06:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:47.566 19:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.566 19:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.566 19:06:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.566 19:06:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.566 19:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.566 19:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.566 19:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.566 19:06:24 -- accel/accel.sh@42 -- # jq -r . 00:05:47.566 [2024-02-14 19:06:24.725969] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:47.566 [2024-02-14 19:06:24.726109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59178 ] 00:05:47.566 [2024-02-14 19:06:24.872216] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.826 [2024-02-14 19:06:25.006009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.826 [2024-02-14 19:06:25.006128] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val=0x1 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val=xor 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- 
accel/accel.sh@21 -- # val=3 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val=software 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@23 -- # accel_module=software 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val=32 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val=32 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val=1 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val=Yes 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.826 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.826 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.826 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:48.762 [2024-02-14 19:06:26.097409] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:49.020 19:06:26 -- accel/accel.sh@21 -- # val= 00:05:49.020 19:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.020 19:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:49.021 19:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:49.021 19:06:26 -- accel/accel.sh@21 -- # val= 00:05:49.021 19:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.021 19:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:49.021 19:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:49.021 19:06:26 -- accel/accel.sh@21 -- # val= 00:05:49.021 19:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.021 19:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:49.021 19:06:26 -- 
accel/accel.sh@20 -- # read -r var val 00:05:49.021 19:06:26 -- accel/accel.sh@21 -- # val= 00:05:49.021 19:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.021 19:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:49.021 19:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:49.021 19:06:26 -- accel/accel.sh@21 -- # val= 00:05:49.021 19:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.021 19:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:49.021 19:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:49.021 19:06:26 -- accel/accel.sh@21 -- # val= 00:05:49.021 19:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.021 19:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:49.021 19:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:49.021 19:06:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:49.021 19:06:26 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:49.021 19:06:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.021 00:05:49.021 real 0m3.372s 00:05:49.021 user 0m2.829s 00:05:49.021 sys 0m0.340s 00:05:49.021 19:06:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.021 19:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.021 ************************************ 00:05:49.021 END TEST accel_xor 00:05:49.021 ************************************ 00:05:49.021 19:06:26 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:49.021 19:06:26 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:05:49.021 19:06:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:49.021 19:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.021 ************************************ 00:05:49.021 START TEST accel_dif_verify 00:05:49.021 ************************************ 00:05:49.021 19:06:26 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_verify 00:05:49.021 19:06:26 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.021 19:06:26 -- accel/accel.sh@17 -- # local accel_module 00:05:49.021 19:06:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:49.021 19:06:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:49.021 19:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.021 19:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.021 19:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.021 19:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.021 19:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.021 19:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.021 19:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.021 19:06:26 -- accel/accel.sh@42 -- # jq -r . 00:05:49.021 [2024-02-14 19:06:26.421347] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
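The accel_xor run summarized above is driven entirely by the flags visible in the trace: -t (run time in seconds), -w (workload), -x (number of xor source buffers) and -y (verify). A minimal way to repeat that run by hand is sketched below; it assumes the same built tree as this log and skips the harness's fd-62 JSON config, which is empty in this run (accel_json_cfg=()), so the software module is expected either way.

  # sketch: rerun the xor workload from this test outside the harness
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3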
00:05:49.021 [2024-02-14 19:06:26.421452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59211 ] 00:05:49.280 [2024-02-14 19:06:26.559816] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.280 [2024-02-14 19:06:26.690414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.280 [2024-02-14 19:06:26.690522] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:50.680 [2024-02-14 19:06:27.752068] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:50.680 19:06:27 -- accel/accel.sh@18 -- # out=' 00:05:50.680 SPDK Configuration: 00:05:50.680 Core mask: 0x1 00:05:50.680 00:05:50.680 Accel Perf Configuration: 00:05:50.680 Workload Type: dif_verify 00:05:50.680 Vector size: 4096 bytes 00:05:50.680 Transfer size: 4096 bytes 00:05:50.680 Block size: 512 bytes 00:05:50.680 Metadata size: 8 bytes 00:05:50.680 Vector count 1 00:05:50.680 Module: software 00:05:50.680 Queue depth: 32 00:05:50.680 Allocate depth: 32 00:05:50.680 # threads/core: 1 00:05:50.680 Run time: 1 seconds 00:05:50.680 Verify: No 00:05:50.680 00:05:50.680 Running for 1 seconds... 00:05:50.680 00:05:50.680 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:50.680 ------------------------------------------------------------------------------------ 00:05:50.680 0,0 98400/s 390 MiB/s 0 0 00:05:50.680 ==================================================================================== 00:05:50.680 Total 98400/s 384 MiB/s 0 0' 00:05:50.680 19:06:27 -- accel/accel.sh@20 -- # IFS=: 00:05:50.680 19:06:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:50.680 19:06:27 -- accel/accel.sh@20 -- # read -r var val 00:05:50.680 19:06:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:50.680 19:06:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.680 19:06:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.680 19:06:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.680 19:06:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.680 19:06:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.680 19:06:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.680 19:06:27 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.680 19:06:27 -- accel/accel.sh@42 -- # jq -r . 00:05:50.680 [2024-02-14 19:06:27.969871] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:50.680 [2024-02-14 19:06:27.969971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59232 ] 00:05:50.939 [2024-02-14 19:06:28.107248] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.939 [2024-02-14 19:06:28.273153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.939 [2024-02-14 19:06:28.273251] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:50.939 19:06:28 -- accel/accel.sh@21 -- # val= 00:05:50.939 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.939 19:06:28 -- accel/accel.sh@21 -- # val= 00:05:50.939 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.939 19:06:28 -- accel/accel.sh@21 -- # val=0x1 00:05:50.939 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.939 19:06:28 -- accel/accel.sh@21 -- # val= 00:05:50.939 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.939 19:06:28 -- accel/accel.sh@21 -- # val= 00:05:50.939 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.939 19:06:28 -- accel/accel.sh@21 -- # val=dif_verify 00:05:50.939 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.939 19:06:28 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.939 19:06:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:50.939 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:50.939 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:50.939 19:06:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.197 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.197 19:06:28 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:51.197 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.197 19:06:28 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:51.197 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.197 19:06:28 -- accel/accel.sh@21 -- # val= 00:05:51.197 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.197 19:06:28 -- accel/accel.sh@21 -- # val=software 00:05:51.197 19:06:28 -- accel/accel.sh@22 -- # 
case "$var" in 00:05:51.197 19:06:28 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.197 19:06:28 -- accel/accel.sh@21 -- # val=32 00:05:51.197 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.197 19:06:28 -- accel/accel.sh@21 -- # val=32 00:05:51.197 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.197 19:06:28 -- accel/accel.sh@21 -- # val=1 00:05:51.197 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.197 19:06:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.197 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.197 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.198 19:06:28 -- accel/accel.sh@21 -- # val=No 00:05:51.198 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.198 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.198 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.198 19:06:28 -- accel/accel.sh@21 -- # val= 00:05:51.198 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.198 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.198 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:51.198 19:06:28 -- accel/accel.sh@21 -- # val= 00:05:51.198 19:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.198 19:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:51.198 19:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:52.133 [2024-02-14 19:06:29.350581] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:52.392 19:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.392 19:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.392 19:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.392 19:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.392 19:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.392 19:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.392 19:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.392 19:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.392 19:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.392 19:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.392 19:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.392 19:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.392 19:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.392 19:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.393 19:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.393 19:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.393 19:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.393 19:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.393 19:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.393 19:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.393 19:06:29 -- accel/accel.sh@21 -- # val= 00:05:52.393 19:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.393 19:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:52.393 19:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:52.393 19:06:29 -- accel/accel.sh@28 -- # [[ -n 
software ]] 00:05:52.393 19:06:29 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:52.393 19:06:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.393 00:05:52.393 real 0m3.230s 00:05:52.393 user 0m2.770s 00:05:52.393 sys 0m0.258s 00:05:52.393 19:06:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.393 19:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:52.393 ************************************ 00:05:52.393 END TEST accel_dif_verify 00:05:52.393 ************************************ 00:05:52.393 19:06:29 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:52.393 19:06:29 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:05:52.393 19:06:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:52.393 19:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:52.393 ************************************ 00:05:52.393 START TEST accel_dif_generate 00:05:52.393 ************************************ 00:05:52.393 19:06:29 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate 00:05:52.393 19:06:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.393 19:06:29 -- accel/accel.sh@17 -- # local accel_module 00:05:52.393 19:06:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:52.393 19:06:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:52.393 19:06:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.393 19:06:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.393 19:06:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.393 19:06:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.393 19:06:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.393 19:06:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.393 19:06:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.393 19:06:29 -- accel/accel.sh@42 -- # jq -r . 00:05:52.393 [2024-02-14 19:06:29.704625] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:52.393 [2024-02-14 19:06:29.704762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59272 ] 00:05:52.652 [2024-02-14 19:06:29.840231] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.652 [2024-02-14 19:06:29.991689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.652 [2024-02-14 19:06:29.991809] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:54.029 [2024-02-14 19:06:31.076782] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:54.029 19:06:31 -- accel/accel.sh@18 -- # out=' 00:05:54.029 SPDK Configuration: 00:05:54.029 Core mask: 0x1 00:05:54.029 00:05:54.029 Accel Perf Configuration: 00:05:54.029 Workload Type: dif_generate 00:05:54.029 Vector size: 4096 bytes 00:05:54.029 Transfer size: 4096 bytes 00:05:54.029 Block size: 512 bytes 00:05:54.029 Metadata size: 8 bytes 00:05:54.029 Vector count 1 00:05:54.029 Module: software 00:05:54.029 Queue depth: 32 00:05:54.029 Allocate depth: 32 00:05:54.029 # threads/core: 1 00:05:54.029 Run time: 1 seconds 00:05:54.029 Verify: No 00:05:54.029 00:05:54.029 Running for 1 seconds... 00:05:54.029 00:05:54.029 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.029 ------------------------------------------------------------------------------------ 00:05:54.029 0,0 134272/s 532 MiB/s 0 0 00:05:54.029 ==================================================================================== 00:05:54.029 Total 134272/s 524 MiB/s 0 0' 00:05:54.029 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.029 19:06:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:54.029 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.029 19:06:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:54.029 19:06:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.029 19:06:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.029 19:06:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.029 19:06:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.029 19:06:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.029 19:06:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.029 19:06:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.029 19:06:31 -- accel/accel.sh@42 -- # jq -r . 00:05:54.029 [2024-02-14 19:06:31.390882] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:54.029 [2024-02-14 19:06:31.390987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59286 ] 00:05:54.288 [2024-02-14 19:06:31.527410] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.288 [2024-02-14 19:06:31.672022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.288 [2024-02-14 19:06:31.672140] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:54.546 19:06:31 -- accel/accel.sh@21 -- # val= 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val= 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val=0x1 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val= 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val= 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val=dif_generate 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val= 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val=software 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # 
case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val=32 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val=32 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val=1 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val=No 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val= 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:54.547 19:06:31 -- accel/accel.sh@21 -- # val= 00:05:54.547 19:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:54.547 19:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:55.484 [2024-02-14 19:06:32.753089] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:55.743 19:06:33 -- accel/accel.sh@21 -- # val= 00:05:55.743 19:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:55.743 19:06:33 -- accel/accel.sh@21 -- # val= 00:05:55.743 19:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:55.743 19:06:33 -- accel/accel.sh@21 -- # val= 00:05:55.743 19:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:55.743 19:06:33 -- accel/accel.sh@21 -- # val= 00:05:55.743 19:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:55.743 19:06:33 -- accel/accel.sh@21 -- # val= 00:05:55.743 19:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:55.743 19:06:33 -- accel/accel.sh@21 -- # val= 00:05:55.743 19:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:55.743 19:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:55.743 19:06:33 -- accel/accel.sh@28 -- # [[ -n 
software ]] 00:05:55.743 19:06:33 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:55.743 19:06:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.743 00:05:55.743 real 0m3.353s 00:05:55.743 user 0m2.846s 00:05:55.743 sys 0m0.306s 00:05:55.743 19:06:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.743 19:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:55.743 ************************************ 00:05:55.743 END TEST accel_dif_generate 00:05:55.743 ************************************ 00:05:55.743 19:06:33 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:55.743 19:06:33 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:05:55.743 19:06:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:55.743 19:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:55.743 ************************************ 00:05:55.743 START TEST accel_dif_generate_copy 00:05:55.743 ************************************ 00:05:55.743 19:06:33 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate_copy 00:05:55.743 19:06:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.743 19:06:33 -- accel/accel.sh@17 -- # local accel_module 00:05:55.743 19:06:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:55.743 19:06:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:55.743 19:06:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.743 19:06:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.743 19:06:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.743 19:06:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.743 19:06:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.743 19:06:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.743 19:06:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.743 19:06:33 -- accel/accel.sh@42 -- # jq -r . 00:05:55.743 [2024-02-14 19:06:33.106711] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
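The three DIF tests in this section (accel_dif_verify, accel_dif_generate, accel_dif_generate_copy) differ only in the -w value passed to accel_perf; the 4096-byte transfers, 512-byte blocks and 8-byte metadata printed in each SPDK Configuration block appear to be accel_perf defaults rather than flags supplied by the harness, which passes only -t 1 -w <workload> here. A hedged sketch of the equivalent standalone runs:

  # sketch: the DIF workloads exercised above, same flags as the trace
  for wl in dif_verify dif_generate dif_generate_copy; do
      /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$wl"
  done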
00:05:55.743 [2024-02-14 19:06:33.106816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59326 ] 00:05:56.003 [2024-02-14 19:06:33.243258] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.003 [2024-02-14 19:06:33.397212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.003 [2024-02-14 19:06:33.397311] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:57.380 [2024-02-14 19:06:34.484648] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:57.380 19:06:34 -- accel/accel.sh@18 -- # out=' 00:05:57.380 SPDK Configuration: 00:05:57.380 Core mask: 0x1 00:05:57.380 00:05:57.380 Accel Perf Configuration: 00:05:57.380 Workload Type: dif_generate_copy 00:05:57.380 Vector size: 4096 bytes 00:05:57.380 Transfer size: 4096 bytes 00:05:57.380 Vector count 1 00:05:57.380 Module: software 00:05:57.380 Queue depth: 32 00:05:57.380 Allocate depth: 32 00:05:57.380 # threads/core: 1 00:05:57.380 Run time: 1 seconds 00:05:57.380 Verify: No 00:05:57.380 00:05:57.380 Running for 1 seconds... 00:05:57.380 00:05:57.380 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:57.380 ------------------------------------------------------------------------------------ 00:05:57.380 0,0 102848/s 408 MiB/s 0 0 00:05:57.380 ==================================================================================== 00:05:57.380 Total 102848/s 401 MiB/s 0 0' 00:05:57.380 19:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:57.380 19:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:57.380 19:06:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:57.380 19:06:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.380 19:06:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:57.380 19:06:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.380 19:06:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.380 19:06:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.380 19:06:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.380 19:06:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.380 19:06:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.380 19:06:34 -- accel/accel.sh@42 -- # jq -r . 00:05:57.380 [2024-02-14 19:06:34.782908] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:57.380 [2024-02-14 19:06:34.783014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59346 ] 00:05:57.639 [2024-02-14 19:06:34.919011] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.899 [2024-02-14 19:06:35.061211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.899 [2024-02-14 19:06:35.061309] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val=0x1 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val=software 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@23 -- # accel_module=software 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val=32 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- 
accel/accel.sh@21 -- # val=32 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val=1 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val=No 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:57.899 19:06:35 -- accel/accel.sh@21 -- # val= 00:05:57.899 19:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:57.899 19:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:58.836 [2024-02-14 19:06:36.145527] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:59.096 19:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.096 19:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.096 19:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.096 19:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.096 19:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.096 19:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.096 19:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.096 19:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.096 19:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.096 19:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.096 19:06:36 -- accel/accel.sh@21 -- # val= 00:05:59.096 19:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:59.096 19:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:59.096 19:06:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:59.096 19:06:36 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:59.096 19:06:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.096 00:05:59.096 real 0m3.348s 00:05:59.096 user 0m2.838s 00:05:59.096 sys 0m0.306s 00:05:59.096 19:06:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.096 19:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:59.096 ************************************ 00:05:59.096 
END TEST accel_dif_generate_copy 00:05:59.096 ************************************ 00:05:59.096 19:06:36 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:59.096 19:06:36 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:59.096 19:06:36 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:05:59.096 19:06:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:59.096 19:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:59.096 ************************************ 00:05:59.096 START TEST accel_comp 00:05:59.096 ************************************ 00:05:59.096 19:06:36 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:59.096 19:06:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.096 19:06:36 -- accel/accel.sh@17 -- # local accel_module 00:05:59.096 19:06:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:59.096 19:06:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:59.096 19:06:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.096 19:06:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.096 19:06:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.096 19:06:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.096 19:06:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.096 19:06:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.096 19:06:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.096 19:06:36 -- accel/accel.sh@42 -- # jq -r . 00:05:59.096 [2024-02-14 19:06:36.503332] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:59.096 [2024-02-14 19:06:36.503417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59380 ] 00:05:59.355 [2024-02-14 19:06:36.632855] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.355 [2024-02-14 19:06:36.759960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.355 [2024-02-14 19:06:36.760071] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:00.733 [2024-02-14 19:06:37.839504] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:00.733 19:06:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:00.733 00:06:00.733 SPDK Configuration: 00:06:00.733 Core mask: 0x1 00:06:00.733 00:06:00.733 Accel Perf Configuration: 00:06:00.733 Workload Type: compress 00:06:00.733 Transfer size: 4096 bytes 00:06:00.733 Vector count 1 00:06:00.733 Module: software 00:06:00.733 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.733 Queue depth: 32 00:06:00.733 Allocate depth: 32 00:06:00.733 # threads/core: 1 00:06:00.733 Run time: 1 seconds 00:06:00.733 Verify: No 00:06:00.733 00:06:00.733 Running for 1 seconds... 
00:06:00.733 00:06:00.733 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.733 ------------------------------------------------------------------------------------ 00:06:00.733 0,0 57376/s 239 MiB/s 0 0 00:06:00.733 ==================================================================================== 00:06:00.733 Total 57376/s 224 MiB/s 0 0' 00:06:00.733 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:00.733 19:06:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.733 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:00.733 19:06:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.733 19:06:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.733 19:06:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.733 19:06:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.734 19:06:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.734 19:06:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.734 19:06:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.734 19:06:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.734 19:06:38 -- accel/accel.sh@42 -- # jq -r . 00:06:00.734 [2024-02-14 19:06:38.132470] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:00.734 [2024-02-14 19:06:38.132593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59405 ] 00:06:00.992 [2024-02-14 19:06:38.268933] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.251 [2024-02-14 19:06:38.425646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.252 [2024-02-14 19:06:38.425779] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val= 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val= 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val= 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val=0x1 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val= 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val= 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 
19:06:38 -- accel/accel.sh@21 -- # val=compress 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val= 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val=software 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val=32 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val=32 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val=1 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val=No 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val= 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:01.252 19:06:38 -- accel/accel.sh@21 -- # val= 00:06:01.252 19:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:01.252 19:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:02.189 [2024-02-14 19:06:39.511268] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:02.448 19:06:39 -- accel/accel.sh@21 -- # val= 00:06:02.448 19:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.448 19:06:39 -- accel/accel.sh@21 -- # val= 00:06:02.448 
19:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.448 19:06:39 -- accel/accel.sh@21 -- # val= 00:06:02.448 19:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.448 19:06:39 -- accel/accel.sh@21 -- # val= 00:06:02.448 19:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.448 19:06:39 -- accel/accel.sh@21 -- # val= 00:06:02.448 19:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.448 19:06:39 -- accel/accel.sh@21 -- # val= 00:06:02.448 19:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:02.448 19:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:02.448 19:06:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.448 19:06:39 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:02.448 19:06:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.448 00:06:02.448 real 0m3.295s 00:06:02.448 user 0m2.804s 00:06:02.448 sys 0m0.288s 00:06:02.448 19:06:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.448 19:06:39 -- common/autotest_common.sh@10 -- # set +x 00:06:02.448 ************************************ 00:06:02.448 END TEST accel_comp 00:06:02.448 ************************************ 00:06:02.448 19:06:39 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:02.448 19:06:39 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:06:02.448 19:06:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:02.448 19:06:39 -- common/autotest_common.sh@10 -- # set +x 00:06:02.448 ************************************ 00:06:02.448 START TEST accel_decomp 00:06:02.448 ************************************ 00:06:02.448 19:06:39 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:02.448 19:06:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.448 19:06:39 -- accel/accel.sh@17 -- # local accel_module 00:06:02.448 19:06:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:02.448 19:06:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:02.448 19:06:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.448 19:06:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.448 19:06:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.448 19:06:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.448 19:06:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.448 19:06:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.448 19:06:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.448 19:06:39 -- accel/accel.sh@42 -- # jq -r . 00:06:02.448 [2024-02-14 19:06:39.851392] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
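The compress and decompress tests above add two things to the invocation seen in earlier workloads: -l points accel_perf at an input file (the repo's test/accel/bib), and -y turns on verification for the decompress run (its configuration reports Verify: Yes, while the compress run without -y reports Verify: No). A hedged sketch of the same pair of runs outside the harness, with paths taken from the trace:

  # sketch: compress/decompress runs from this section
  bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l "$bib"
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l "$bib" -y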
00:06:02.448 [2024-02-14 19:06:39.851577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59440 ] 00:06:02.707 [2024-02-14 19:06:39.999392] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.966 [2024-02-14 19:06:40.156477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.966 [2024-02-14 19:06:40.156607] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:03.904 [2024-02-14 19:06:41.239240] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:04.163 19:06:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:04.163 00:06:04.163 SPDK Configuration: 00:06:04.163 Core mask: 0x1 00:06:04.163 00:06:04.163 Accel Perf Configuration: 00:06:04.163 Workload Type: decompress 00:06:04.163 Transfer size: 4096 bytes 00:06:04.163 Vector count 1 00:06:04.163 Module: software 00:06:04.163 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.163 Queue depth: 32 00:06:04.163 Allocate depth: 32 00:06:04.163 # threads/core: 1 00:06:04.163 Run time: 1 seconds 00:06:04.163 Verify: Yes 00:06:04.163 00:06:04.163 Running for 1 seconds... 00:06:04.163 00:06:04.163 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.163 ------------------------------------------------------------------------------------ 00:06:04.163 0,0 81536/s 150 MiB/s 0 0 00:06:04.163 ==================================================================================== 00:06:04.163 Total 81536/s 318 MiB/s 0 0' 00:06:04.163 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.163 19:06:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:04.163 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.163 19:06:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:04.163 19:06:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.163 19:06:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.163 19:06:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.163 19:06:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.163 19:06:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.163 19:06:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.163 19:06:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.163 19:06:41 -- accel/accel.sh@42 -- # jq -r . 00:06:04.163 [2024-02-14 19:06:41.518457] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:04.163 [2024-02-14 19:06:41.518573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59459 ] 00:06:04.422 [2024-02-14 19:06:41.655857] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.422 [2024-02-14 19:06:41.743607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.422 [2024-02-14 19:06:41.743698] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val=0x1 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val=decompress 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val=software 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 
-- accel/accel.sh@21 -- # val=32 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val=32 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val=1 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val=Yes 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:04.422 19:06:41 -- accel/accel.sh@21 -- # val= 00:06:04.422 19:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:04.422 19:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:05.799 [2024-02-14 19:06:42.821271] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:05.799 19:06:43 -- accel/accel.sh@21 -- # val= 00:06:05.799 19:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:05.799 19:06:43 -- accel/accel.sh@21 -- # val= 00:06:05.799 19:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:05.799 19:06:43 -- accel/accel.sh@21 -- # val= 00:06:05.799 19:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:05.799 19:06:43 -- accel/accel.sh@21 -- # val= 00:06:05.799 19:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:05.799 ************************************ 00:06:05.799 END TEST accel_decomp 00:06:05.799 ************************************ 00:06:05.799 19:06:43 -- accel/accel.sh@21 -- # val= 00:06:05.799 19:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:05.799 19:06:43 -- accel/accel.sh@21 -- # val= 00:06:05.799 19:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:05.799 19:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:05.799 19:06:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:05.799 19:06:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:05.800 
19:06:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.800 00:06:05.800 real 0m3.254s 00:06:05.800 user 0m2.755s 00:06:05.800 sys 0m0.296s 00:06:05.800 19:06:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.800 19:06:43 -- common/autotest_common.sh@10 -- # set +x 00:06:05.800 19:06:43 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:05.800 19:06:43 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:05.800 19:06:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:05.800 19:06:43 -- common/autotest_common.sh@10 -- # set +x 00:06:05.800 ************************************ 00:06:05.800 START TEST accel_decmop_full 00:06:05.800 ************************************ 00:06:05.800 19:06:43 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:05.800 19:06:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.800 19:06:43 -- accel/accel.sh@17 -- # local accel_module 00:06:05.800 19:06:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:05.800 19:06:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:05.800 19:06:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.800 19:06:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.800 19:06:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.800 19:06:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.800 19:06:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.800 19:06:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.800 19:06:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.800 19:06:43 -- accel/accel.sh@42 -- # jq -r . 00:06:05.800 [2024-02-14 19:06:43.158176] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:05.800 [2024-02-14 19:06:43.158269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59494 ] 00:06:06.059 [2024-02-14 19:06:43.295924] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.059 [2024-02-14 19:06:43.405557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.059 [2024-02-14 19:06:43.405653] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:07.434 [2024-02-14 19:06:44.499849] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:07.434 19:06:44 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:07.434 00:06:07.434 SPDK Configuration: 00:06:07.434 Core mask: 0x1 00:06:07.434 00:06:07.434 Accel Perf Configuration: 00:06:07.434 Workload Type: decompress 00:06:07.434 Transfer size: 111250 bytes 00:06:07.434 Vector count 1 00:06:07.434 Module: software 00:06:07.434 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:07.434 Queue depth: 32 00:06:07.434 Allocate depth: 32 00:06:07.434 # threads/core: 1 00:06:07.434 Run time: 1 seconds 00:06:07.434 Verify: Yes 00:06:07.434 00:06:07.434 Running for 1 seconds... 00:06:07.434 00:06:07.435 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.435 ------------------------------------------------------------------------------------ 00:06:07.435 0,0 5344/s 220 MiB/s 0 0 00:06:07.435 ==================================================================================== 00:06:07.435 Total 5344/s 566 MiB/s 0 0' 00:06:07.435 19:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:07.435 19:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:07.435 19:06:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:07.435 19:06:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:07.435 19:06:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.435 19:06:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.435 19:06:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.435 19:06:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.435 19:06:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.435 19:06:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.435 19:06:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.435 19:06:44 -- accel/accel.sh@42 -- # jq -r . 00:06:07.435 [2024-02-14 19:06:44.782410] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
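In the accel_decmop_full case the only added flag is "-o 0", and the reported transfer size grows from 4096 to 111250 bytes, so "-o 0" appears to select the full decompressed chunk size rather than a fixed block. The Total row again matches transfers/s times transfer size (assuming MiB = 1048576 bytes):

  echo $(( 5344 * 111250 / 1048576 ))   # ~566 MiB/s, matching the Total row above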
00:06:07.435 [2024-02-14 19:06:44.782519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59513 ] 00:06:07.693 [2024-02-14 19:06:44.919166] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.693 [2024-02-14 19:06:45.032828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.693 [2024-02-14 19:06:45.032926] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val= 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val= 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val= 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val=0x1 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val= 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val= 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val=decompress 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.952 19:06:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val= 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val=software 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.952 19:06:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.952 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.952 19:06:45 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:07.952 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.953 19:06:45 
-- accel/accel.sh@21 -- # val=32 00:06:07.953 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.953 19:06:45 -- accel/accel.sh@21 -- # val=32 00:06:07.953 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.953 19:06:45 -- accel/accel.sh@21 -- # val=1 00:06:07.953 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.953 19:06:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:07.953 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.953 19:06:45 -- accel/accel.sh@21 -- # val=Yes 00:06:07.953 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.953 19:06:45 -- accel/accel.sh@21 -- # val= 00:06:07.953 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.953 19:06:45 -- accel/accel.sh@21 -- # val= 00:06:07.953 19:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:07.953 19:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:08.889 [2024-02-14 19:06:46.119056] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:09.148 19:06:46 -- accel/accel.sh@21 -- # val= 00:06:09.148 19:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.148 19:06:46 -- accel/accel.sh@21 -- # val= 00:06:09.148 19:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.148 19:06:46 -- accel/accel.sh@21 -- # val= 00:06:09.148 19:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.148 19:06:46 -- accel/accel.sh@21 -- # val= 00:06:09.148 19:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.148 19:06:46 -- accel/accel.sh@21 -- # val= 00:06:09.148 19:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.148 19:06:46 -- accel/accel.sh@21 -- # val= 00:06:09.148 19:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:09.148 19:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:09.148 19:06:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.148 19:06:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:09.148 19:06:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.148 00:06:09.148 real 0m3.240s 00:06:09.148 user 
0m2.745s 00:06:09.148 sys 0m0.292s 00:06:09.148 19:06:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.148 19:06:46 -- common/autotest_common.sh@10 -- # set +x 00:06:09.148 ************************************ 00:06:09.148 END TEST accel_decmop_full 00:06:09.148 ************************************ 00:06:09.148 19:06:46 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:09.148 19:06:46 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:09.148 19:06:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:09.148 19:06:46 -- common/autotest_common.sh@10 -- # set +x 00:06:09.148 ************************************ 00:06:09.148 START TEST accel_decomp_mcore 00:06:09.148 ************************************ 00:06:09.148 19:06:46 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:09.148 19:06:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.148 19:06:46 -- accel/accel.sh@17 -- # local accel_module 00:06:09.148 19:06:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:09.148 19:06:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:09.148 19:06:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.148 19:06:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.148 19:06:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.148 19:06:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.148 19:06:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.148 19:06:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.148 19:06:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.148 19:06:46 -- accel/accel.sh@42 -- # jq -r . 00:06:09.148 [2024-02-14 19:06:46.444994] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:09.148 [2024-02-14 19:06:46.445549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59550 ] 00:06:09.423 [2024-02-14 19:06:46.582353] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.423 [2024-02-14 19:06:46.706694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.423 [2024-02-14 19:06:46.706849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.423 [2024-02-14 19:06:46.707237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.423 [2024-02-14 19:06:46.707279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.423 [2024-02-14 19:06:46.707743] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:10.805 [2024-02-14 19:06:47.805098] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:10.805 19:06:48 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:10.805 00:06:10.805 SPDK Configuration: 00:06:10.805 Core mask: 0xf 00:06:10.805 00:06:10.805 Accel Perf Configuration: 00:06:10.805 Workload Type: decompress 00:06:10.805 Transfer size: 4096 bytes 00:06:10.805 Vector count 1 00:06:10.805 Module: software 00:06:10.805 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:10.805 Queue depth: 32 00:06:10.805 Allocate depth: 32 00:06:10.805 # threads/core: 1 00:06:10.805 Run time: 1 seconds 00:06:10.805 Verify: Yes 00:06:10.805 00:06:10.805 Running for 1 seconds... 00:06:10.805 00:06:10.805 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.805 ------------------------------------------------------------------------------------ 00:06:10.805 0,0 48992/s 90 MiB/s 0 0 00:06:10.805 3,0 45952/s 84 MiB/s 0 0 00:06:10.805 2,0 43648/s 80 MiB/s 0 0 00:06:10.805 1,0 45536/s 83 MiB/s 0 0 00:06:10.805 ==================================================================================== 00:06:10.805 Total 184128/s 719 MiB/s 0 0' 00:06:10.805 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:10.805 19:06:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:10.805 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:10.805 19:06:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:10.805 19:06:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.805 19:06:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.805 19:06:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.805 19:06:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.805 19:06:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.805 19:06:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.805 19:06:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.805 19:06:48 -- accel/accel.sh@42 -- # jq -r . 00:06:10.805 [2024-02-14 19:06:48.096356] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
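The accel_decomp_mcore run is started with "-m 0xf", which matches the four reactor notices (cores 0 through 3) and the four Core,Thread rows above; the per-core transfer rates add up to the Total row. Cross-check, assuming MiB = 1048576 bytes:

  echo $(( 48992 + 45952 + 43648 + 45536 ))   # 184128 transfers/s, the Total row
  echo $(( 184128 * 4096 / 1048576 ))         # ~719 MiB/s, the Total bandwidth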
00:06:10.805 [2024-02-14 19:06:48.096982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59577 ] 00:06:11.064 [2024-02-14 19:06:48.232124] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.064 [2024-02-14 19:06:48.333093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.064 [2024-02-14 19:06:48.333289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.064 [2024-02-14 19:06:48.333773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.064 [2024-02-14 19:06:48.333824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.064 [2024-02-14 19:06:48.334043] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val=0xf 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val=decompress 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val=software 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- 
# read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val=32 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val=32 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val=1 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val=Yes 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:11.064 19:06:48 -- accel/accel.sh@21 -- # val= 00:06:11.064 19:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:11.064 19:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:12.440 [2024-02-14 19:06:49.424052] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:12.440 19:06:49 -- accel/accel.sh@21 -- # val= 00:06:12.440 19:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:12.440 19:06:49 -- accel/accel.sh@21 -- # val= 00:06:12.440 19:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:12.440 19:06:49 -- accel/accel.sh@21 -- # val= 00:06:12.440 19:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:12.440 19:06:49 -- accel/accel.sh@21 -- # val= 00:06:12.440 19:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:12.440 19:06:49 -- accel/accel.sh@21 -- # val= 00:06:12.440 19:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:12.440 19:06:49 -- accel/accel.sh@21 -- # val= 00:06:12.440 19:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:12.440 19:06:49 -- 
accel/accel.sh@20 -- # read -r var val 00:06:12.440 19:06:49 -- accel/accel.sh@21 -- # val= 00:06:12.440 19:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:12.440 19:06:49 -- accel/accel.sh@21 -- # val= 00:06:12.440 19:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:12.440 19:06:49 -- accel/accel.sh@21 -- # val= 00:06:12.440 19:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:12.440 19:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:12.440 19:06:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.440 19:06:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:12.440 19:06:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.440 00:06:12.440 real 0m3.283s 00:06:12.440 user 0m4.997s 00:06:12.440 sys 0m0.181s 00:06:12.440 19:06:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.440 ************************************ 00:06:12.440 END TEST accel_decomp_mcore 00:06:12.440 19:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:12.440 ************************************ 00:06:12.441 19:06:49 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.441 19:06:49 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:06:12.441 19:06:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:12.441 19:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:12.441 ************************************ 00:06:12.441 START TEST accel_decomp_full_mcore 00:06:12.441 ************************************ 00:06:12.441 19:06:49 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.441 19:06:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.441 19:06:49 -- accel/accel.sh@17 -- # local accel_module 00:06:12.441 19:06:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.441 19:06:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.441 19:06:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.441 19:06:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.441 19:06:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.441 19:06:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.441 19:06:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.441 19:06:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.441 19:06:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.441 19:06:49 -- accel/accel.sh@42 -- # jq -r . 00:06:12.441 [2024-02-14 19:06:49.773431] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:12.441 [2024-02-14 19:06:49.773523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59614 ] 00:06:12.699 [2024-02-14 19:06:49.908134] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.700 [2024-02-14 19:06:50.060862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.700 [2024-02-14 19:06:50.060992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.700 [2024-02-14 19:06:50.061121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.700 [2024-02-14 19:06:50.061121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.700 [2024-02-14 19:06:50.062050] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:14.075 [2024-02-14 19:06:51.164973] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:14.075 19:06:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:14.075 00:06:14.075 SPDK Configuration: 00:06:14.075 Core mask: 0xf 00:06:14.075 00:06:14.075 Accel Perf Configuration: 00:06:14.075 Workload Type: decompress 00:06:14.075 Transfer size: 111250 bytes 00:06:14.075 Vector count 1 00:06:14.075 Module: software 00:06:14.075 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.075 Queue depth: 32 00:06:14.075 Allocate depth: 32 00:06:14.075 # threads/core: 1 00:06:14.075 Run time: 1 seconds 00:06:14.075 Verify: Yes 00:06:14.075 00:06:14.075 Running for 1 seconds... 00:06:14.075 00:06:14.075 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.075 ------------------------------------------------------------------------------------ 00:06:14.075 0,0 4576/s 189 MiB/s 0 0 00:06:14.075 3,0 4352/s 179 MiB/s 0 0 00:06:14.075 2,0 4480/s 185 MiB/s 0 0 00:06:14.075 1,0 4480/s 185 MiB/s 0 0 00:06:14.075 ==================================================================================== 00:06:14.075 Total 17888/s 1897 MiB/s 0 0' 00:06:14.075 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.075 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.075 19:06:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:14.075 19:06:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:14.075 19:06:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.075 19:06:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.075 19:06:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.075 19:06:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.075 19:06:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.075 19:06:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.075 19:06:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.076 19:06:51 -- accel/accel.sh@42 -- # jq -r . 00:06:14.076 [2024-02-14 19:06:51.469992] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
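Same pattern for accel_decomp_full_mcore ("-o 0 -m 0xf"): four cores at the 111250-byte transfer size, with the per-core rates summing to the Total row above:

  echo $(( 4576 + 4352 + 4480 + 4480 ))   # 17888 transfers/s, the Total row
  echo $(( 17888 * 111250 / 1048576 ))    # ~1897 MiB/s, the Total bandwidth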
00:06:14.076 [2024-02-14 19:06:51.470120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59637 ] 00:06:14.334 [2024-02-14 19:06:51.609076] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.595 [2024-02-14 19:06:51.768949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.595 [2024-02-14 19:06:51.769112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.595 [2024-02-14 19:06:51.769256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.595 [2024-02-14 19:06:51.769265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.595 [2024-02-14 19:06:51.769451] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val=0xf 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val=decompress 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val=software 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 
-- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val=32 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val=32 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val=1 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val=Yes 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:14.595 19:06:51 -- accel/accel.sh@21 -- # val= 00:06:14.595 19:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:14.595 19:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:15.529 [2024-02-14 19:06:52.873288] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:15.787 19:06:53 -- accel/accel.sh@21 -- # val= 00:06:15.787 19:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.787 19:06:53 -- accel/accel.sh@21 -- # val= 00:06:15.787 19:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.787 19:06:53 -- accel/accel.sh@21 -- # val= 00:06:15.787 19:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.787 19:06:53 -- accel/accel.sh@21 -- # val= 00:06:15.787 19:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.787 19:06:53 -- accel/accel.sh@21 -- # val= 00:06:15.787 19:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.787 19:06:53 -- accel/accel.sh@21 -- # val= 00:06:15.787 19:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.787 19:06:53 -- 
accel/accel.sh@20 -- # read -r var val 00:06:15.787 19:06:53 -- accel/accel.sh@21 -- # val= 00:06:15.787 19:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.787 19:06:53 -- accel/accel.sh@21 -- # val= 00:06:15.787 19:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.787 19:06:53 -- accel/accel.sh@21 -- # val= 00:06:15.787 19:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:15.787 19:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:15.787 19:06:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.787 19:06:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:15.787 19:06:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.787 00:06:15.787 real 0m3.410s 00:06:15.787 user 0m5.075s 00:06:15.787 sys 0m0.162s 00:06:15.787 19:06:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.787 19:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:15.787 ************************************ 00:06:15.787 END TEST accel_decomp_full_mcore 00:06:15.787 ************************************ 00:06:15.787 19:06:53 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:15.788 19:06:53 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:15.788 19:06:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:15.788 19:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:16.046 ************************************ 00:06:16.046 START TEST accel_decomp_mthread 00:06:16.046 ************************************ 00:06:16.046 19:06:53 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:16.046 19:06:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.046 19:06:53 -- accel/accel.sh@17 -- # local accel_module 00:06:16.046 19:06:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:16.046 19:06:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:16.046 19:06:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.046 19:06:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.046 19:06:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.046 19:06:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.046 19:06:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.046 19:06:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.046 19:06:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.046 19:06:53 -- accel/accel.sh@42 -- # jq -r . 00:06:16.046 [2024-02-14 19:06:53.234853] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:16.046 [2024-02-14 19:06:53.234973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59680 ] 00:06:16.046 [2024-02-14 19:06:53.368793] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.305 [2024-02-14 19:06:53.522743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.305 [2024-02-14 19:06:53.522873] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:17.241 [2024-02-14 19:06:54.612652] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:17.500 19:06:54 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:17.500 00:06:17.500 SPDK Configuration: 00:06:17.500 Core mask: 0x1 00:06:17.500 00:06:17.500 Accel Perf Configuration: 00:06:17.500 Workload Type: decompress 00:06:17.500 Transfer size: 4096 bytes 00:06:17.500 Vector count 1 00:06:17.500 Module: software 00:06:17.500 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:17.500 Queue depth: 32 00:06:17.500 Allocate depth: 32 00:06:17.500 # threads/core: 2 00:06:17.500 Run time: 1 seconds 00:06:17.500 Verify: Yes 00:06:17.500 00:06:17.500 Running for 1 seconds... 00:06:17.500 00:06:17.500 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.500 ------------------------------------------------------------------------------------ 00:06:17.500 0,1 33568/s 61 MiB/s 0 0 00:06:17.500 0,0 33408/s 61 MiB/s 0 0 00:06:17.500 ==================================================================================== 00:06:17.500 Total 66976/s 261 MiB/s 0 0' 00:06:17.500 19:06:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:17.500 19:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:17.500 19:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:17.500 19:06:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:17.500 19:06:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.500 19:06:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.500 19:06:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.500 19:06:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.500 19:06:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.500 19:06:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.500 19:06:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.500 19:06:54 -- accel/accel.sh@42 -- # jq -r . 00:06:17.500 [2024-02-14 19:06:54.910524] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
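accel_decomp_mthread adds "-T 2", which shows up in the configuration as "# threads/core: 2" and as two result rows (0,0 and 0,1) on the single enabled core; their rates sum to the Total row above:

  echo $(( 33568 + 33408 ))            # 66976 transfers/s, the Total row
  echo $(( 66976 * 4096 / 1048576 ))   # ~261 MiB/s, the Total bandwidth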
00:06:17.500 [2024-02-14 19:06:54.910837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59698 ] 00:06:17.761 [2024-02-14 19:06:55.051525] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.020 [2024-02-14 19:06:55.205346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.020 [2024-02-14 19:06:55.205442] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val=0x1 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val=decompress 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val=software 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 
-- accel/accel.sh@21 -- # val=32 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val=32 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val=2 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.020 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.020 19:06:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.020 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.021 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.021 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.021 19:06:55 -- accel/accel.sh@21 -- # val=Yes 00:06:18.021 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.021 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.021 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.021 19:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.021 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.021 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.021 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.021 19:06:55 -- accel/accel.sh@21 -- # val= 00:06:18.021 19:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.021 19:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:18.021 19:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:18.964 [2024-02-14 19:06:56.297668] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:19.233 19:06:56 -- accel/accel.sh@21 -- # val= 00:06:19.233 19:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:19.233 19:06:56 -- accel/accel.sh@21 -- # val= 00:06:19.233 19:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:19.233 19:06:56 -- accel/accel.sh@21 -- # val= 00:06:19.233 19:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:19.233 19:06:56 -- accel/accel.sh@21 -- # val= 00:06:19.233 19:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:19.233 19:06:56 -- accel/accel.sh@21 -- # val= 00:06:19.233 19:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:19.233 19:06:56 -- accel/accel.sh@21 -- # val= 00:06:19.233 ************************************ 00:06:19.233 END TEST accel_decomp_mthread 00:06:19.233 ************************************ 00:06:19.233 19:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:19.233 19:06:56 -- accel/accel.sh@21 -- # val= 00:06:19.233 19:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.233 19:06:56 
-- accel/accel.sh@20 -- # IFS=: 00:06:19.233 19:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:19.233 19:06:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.233 19:06:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:19.233 19:06:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.233 00:06:19.233 real 0m3.366s 00:06:19.233 user 0m2.854s 00:06:19.233 sys 0m0.305s 00:06:19.233 19:06:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.233 19:06:56 -- common/autotest_common.sh@10 -- # set +x 00:06:19.233 19:06:56 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.233 19:06:56 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:06:19.233 19:06:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:19.233 19:06:56 -- common/autotest_common.sh@10 -- # set +x 00:06:19.233 ************************************ 00:06:19.234 START TEST accel_deomp_full_mthread 00:06:19.234 ************************************ 00:06:19.234 19:06:56 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.234 19:06:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.234 19:06:56 -- accel/accel.sh@17 -- # local accel_module 00:06:19.234 19:06:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.234 19:06:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.234 19:06:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.234 19:06:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.234 19:06:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.234 19:06:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.234 19:06:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.234 19:06:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.234 19:06:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.234 19:06:56 -- accel/accel.sh@42 -- # jq -r . 00:06:19.492 [2024-02-14 19:06:56.651684] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:19.492 [2024-02-14 19:06:56.651785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59734 ] 00:06:19.492 [2024-02-14 19:06:56.784359] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.751 [2024-02-14 19:06:56.936238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.751 [2024-02-14 19:06:56.936356] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:20.687 [2024-02-14 19:06:58.050761] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:20.946 19:06:58 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:20.946 00:06:20.946 SPDK Configuration: 00:06:20.946 Core mask: 0x1 00:06:20.946 00:06:20.946 Accel Perf Configuration: 00:06:20.946 Workload Type: decompress 00:06:20.946 Transfer size: 111250 bytes 00:06:20.946 Vector count 1 00:06:20.946 Module: software 00:06:20.946 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.946 Queue depth: 32 00:06:20.946 Allocate depth: 32 00:06:20.946 # threads/core: 2 00:06:20.946 Run time: 1 seconds 00:06:20.946 Verify: Yes 00:06:20.946 00:06:20.946 Running for 1 seconds... 00:06:20.946 00:06:20.947 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.947 ------------------------------------------------------------------------------------ 00:06:20.947 0,1 2304/s 95 MiB/s 0 0 00:06:20.947 0,0 2272/s 93 MiB/s 0 0 00:06:20.947 ==================================================================================== 00:06:20.947 Total 4576/s 485 MiB/s 0 0' 00:06:20.947 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:20.947 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:20.947 19:06:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:20.947 19:06:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:20.947 19:06:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.947 19:06:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.947 19:06:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.947 19:06:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.947 19:06:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.947 19:06:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.947 19:06:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.947 19:06:58 -- accel/accel.sh@42 -- # jq -r . 00:06:20.947 [2024-02-14 19:06:58.348293] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:20.947 [2024-02-14 19:06:58.348391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59759 ] 00:06:21.205 [2024-02-14 19:06:58.483241] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.464 [2024-02-14 19:06:58.633933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.464 [2024-02-14 19:06:58.634036] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val= 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val= 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val= 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val=0x1 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val= 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val= 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val=decompress 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val= 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val=software 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 
-- accel/accel.sh@21 -- # val=32 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val=32 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val=2 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val=Yes 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val= 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:21.464 19:06:58 -- accel/accel.sh@21 -- # val= 00:06:21.464 19:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:21.464 19:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:22.400 [2024-02-14 19:06:59.744983] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:22.660 19:07:00 -- accel/accel.sh@21 -- # val= 00:06:22.660 19:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.660 19:07:00 -- accel/accel.sh@21 -- # val= 00:06:22.660 19:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.660 19:07:00 -- accel/accel.sh@21 -- # val= 00:06:22.660 19:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.660 19:07:00 -- accel/accel.sh@21 -- # val= 00:06:22.660 19:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.660 19:07:00 -- accel/accel.sh@21 -- # val= 00:06:22.660 ************************************ 00:06:22.660 END TEST accel_deomp_full_mthread 00:06:22.660 ************************************ 00:06:22.660 19:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.660 19:07:00 -- accel/accel.sh@21 -- # val= 00:06:22.660 19:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.660 19:07:00 -- accel/accel.sh@21 -- # val= 00:06:22.660 19:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.660 
19:07:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.660 19:07:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.660 19:07:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.660 19:07:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:22.660 19:07:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.660 00:06:22.660 real 0m3.399s 00:06:22.660 user 0m2.894s 00:06:22.660 sys 0m0.299s 00:06:22.660 19:07:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.660 19:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:22.660 19:07:00 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:22.660 19:07:00 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:22.660 19:07:00 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:06:22.660 19:07:00 -- accel/accel.sh@129 -- # build_accel_config 00:06:22.660 19:07:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:22.660 19:07:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.660 19:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:22.660 19:07:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.660 19:07:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.660 19:07:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.660 19:07:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.660 19:07:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.660 19:07:00 -- accel/accel.sh@42 -- # jq -r . 00:06:22.919 ************************************ 00:06:22.919 START TEST accel_dif_functional_tests 00:06:22.919 ************************************ 00:06:22.919 19:07:00 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:22.919 [2024-02-14 19:07:00.129142] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:22.919 [2024-02-14 19:07:00.129256] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59789 ] 00:06:22.919 [2024-02-14 19:07:00.262199] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.178 [2024-02-14 19:07:00.415696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.178 [2024-02-14 19:07:00.415835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.178 [2024-02-14 19:07:00.415823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.178 [2024-02-14 19:07:00.416322] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:23.178 00:06:23.178 00:06:23.178 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.178 http://cunit.sourceforge.net/ 00:06:23.178 00:06:23.178 00:06:23.178 Suite: accel_dif 00:06:23.178 Test: verify: DIF generated, GUARD check ...passed 00:06:23.178 Test: verify: DIF generated, APPTAG check ...passed 00:06:23.178 Test: verify: DIF generated, REFTAG check ...passed 00:06:23.178 Test: verify: DIF not generated, GUARD check ...passed 00:06:23.178 Test: verify: DIF not generated, APPTAG check ...[2024-02-14 19:07:00.545333] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:23.178 [2024-02-14 19:07:00.545427] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:23.178 [2024-02-14 19:07:00.545485] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:23.178 [2024-02-14 19:07:00.545544] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:23.178 passed 00:06:23.178 Test: verify: DIF not generated, REFTAG check ...passed 00:06:23.178 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:23.178 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:23.178 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:23.178 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-02-14 19:07:00.545576] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:23.178 [2024-02-14 19:07:00.545606] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:23.178 [2024-02-14 19:07:00.545672] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:23.178 passed 00:06:23.178 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:23.178 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:23.178 Test: generate copy: DIF generated, GUARD check ...[2024-02-14 19:07:00.545837] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:23.178 passed 00:06:23.178 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:23.178 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:23.178 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:23.178 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:23.178 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:23.178 Test: generate copy: iovecs-len 
validate ...passed 00:06:23.178 Test: generate copy: buffer alignment validate ...passed 00:06:23.178 00:06:23.178 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.178 suites 1 1 n/a 0 0 00:06:23.178 tests 20 20 20 0 0 00:06:23.178 asserts 204 204 204 0 n/a 00:06:23.178 00:06:23.178 Elapsed time = 0.002 seconds 00:06:23.178 [2024-02-14 19:07:00.546135] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:23.178 [2024-02-14 19:07:00.546419] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:23.745 00:06:23.745 real 0m0.809s 00:06:23.745 user 0m1.137s 00:06:23.745 sys 0m0.206s 00:06:23.745 19:07:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.745 19:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:23.745 ************************************ 00:06:23.745 END TEST accel_dif_functional_tests 00:06:23.745 ************************************ 00:06:23.745 00:06:23.745 real 1m15.028s 00:06:23.745 user 1m17.319s 00:06:23.745 sys 0m8.976s 00:06:23.745 19:07:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.745 19:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:23.745 ************************************ 00:06:23.745 END TEST accel 00:06:23.745 ************************************ 00:06:23.745 19:07:00 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:23.745 19:07:00 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:23.745 19:07:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:23.745 19:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:23.745 ************************************ 00:06:23.745 START TEST accel_rpc 00:06:23.745 ************************************ 00:06:23.745 19:07:00 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:23.745 * Looking for test storage... 00:06:23.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:23.745 19:07:01 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:23.745 19:07:01 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59858 00:06:23.745 19:07:01 -- accel/accel_rpc.sh@15 -- # waitforlisten 59858 00:06:23.745 19:07:01 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:23.745 19:07:01 -- common/autotest_common.sh@817 -- # '[' -z 59858 ']' 00:06:23.745 19:07:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.745 19:07:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:23.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.745 19:07:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.745 19:07:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:23.745 19:07:01 -- common/autotest_common.sh@10 -- # set +x 00:06:23.745 [2024-02-14 19:07:01.119494] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:23.745 [2024-02-14 19:07:01.119621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59858 ] 00:06:24.004 [2024-02-14 19:07:01.255200] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.004 [2024-02-14 19:07:01.406471] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.004 [2024-02-14 19:07:01.406680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.939 19:07:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:24.939 19:07:02 -- common/autotest_common.sh@850 -- # return 0 00:06:24.939 19:07:02 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:24.939 19:07:02 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:24.939 19:07:02 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:24.939 19:07:02 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:24.939 19:07:02 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:24.939 19:07:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:24.939 19:07:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:24.939 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.939 ************************************ 00:06:24.939 START TEST accel_assign_opcode 00:06:24.939 ************************************ 00:06:24.939 19:07:02 -- common/autotest_common.sh@1102 -- # accel_assign_opcode_test_suite 00:06:24.939 19:07:02 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:24.939 19:07:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.939 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.939 [2024-02-14 19:07:02.079254] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:24.939 19:07:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.939 19:07:02 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:24.939 19:07:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.939 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:24.939 [2024-02-14 19:07:02.087212] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:24.939 19:07:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.939 19:07:02 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:24.939 19:07:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.939 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:25.198 19:07:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:25.198 19:07:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:25.198 19:07:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:25.198 19:07:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:25.198 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:25.198 19:07:02 -- accel/accel_rpc.sh@42 -- # grep software 00:06:25.198 19:07:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:25.198 software 00:06:25.198 00:06:25.198 real 0m0.374s 00:06:25.198 user 0m0.057s 00:06:25.198 sys 0m0.011s 00:06:25.198 19:07:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.198 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:25.198 ************************************ 
00:06:25.198 END TEST accel_assign_opcode 00:06:25.198 ************************************ 00:06:25.198 19:07:02 -- accel/accel_rpc.sh@55 -- # killprocess 59858 00:06:25.198 19:07:02 -- common/autotest_common.sh@924 -- # '[' -z 59858 ']' 00:06:25.198 19:07:02 -- common/autotest_common.sh@928 -- # kill -0 59858 00:06:25.198 19:07:02 -- common/autotest_common.sh@929 -- # uname 00:06:25.198 19:07:02 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:25.198 19:07:02 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 59858 00:06:25.198 19:07:02 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:25.198 19:07:02 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:25.198 killing process with pid 59858 00:06:25.198 19:07:02 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 59858' 00:06:25.198 19:07:02 -- common/autotest_common.sh@943 -- # kill 59858 00:06:25.198 19:07:02 -- common/autotest_common.sh@948 -- # wait 59858 00:06:25.764 00:06:25.764 real 0m2.115s 00:06:25.764 user 0m2.107s 00:06:25.764 sys 0m0.522s 00:06:25.764 19:07:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.764 19:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:25.764 ************************************ 00:06:25.764 END TEST accel_rpc 00:06:25.764 ************************************ 00:06:25.764 19:07:03 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:25.764 19:07:03 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:25.764 19:07:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:25.764 19:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:25.764 ************************************ 00:06:25.764 START TEST app_cmdline 00:06:25.764 ************************************ 00:06:25.764 19:07:03 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.022 * Looking for test storage... 00:06:26.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.022 19:07:03 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.022 19:07:03 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59974 00:06:26.022 19:07:03 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.022 19:07:03 -- app/cmdline.sh@18 -- # waitforlisten 59974 00:06:26.022 19:07:03 -- common/autotest_common.sh@817 -- # '[' -z 59974 ']' 00:06:26.022 19:07:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.022 19:07:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.022 19:07:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.022 19:07:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.022 19:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:26.022 [2024-02-14 19:07:03.292178] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:26.022 [2024-02-14 19:07:03.292288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59974 ] 00:06:26.022 [2024-02-14 19:07:03.426919] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.281 [2024-02-14 19:07:03.573045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.281 [2024-02-14 19:07:03.573232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.848 19:07:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.848 19:07:04 -- common/autotest_common.sh@850 -- # return 0 00:06:26.848 19:07:04 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:27.106 { 00:06:27.106 "fields": { 00:06:27.106 "commit": "aa824ae66", 00:06:27.106 "major": 24, 00:06:27.106 "minor": 5, 00:06:27.106 "patch": 0, 00:06:27.106 "suffix": "-pre" 00:06:27.106 }, 00:06:27.107 "version": "SPDK v24.05-pre git sha1 aa824ae66" 00:06:27.107 } 00:06:27.107 19:07:04 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:27.107 19:07:04 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:27.107 19:07:04 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:27.107 19:07:04 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:27.107 19:07:04 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:27.107 19:07:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.107 19:07:04 -- common/autotest_common.sh@10 -- # set +x 00:06:27.107 19:07:04 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:27.107 19:07:04 -- app/cmdline.sh@26 -- # sort 00:06:27.107 19:07:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.365 19:07:04 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:27.365 19:07:04 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:27.365 19:07:04 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.365 19:07:04 -- common/autotest_common.sh@638 -- # local es=0 00:06:27.365 19:07:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.365 19:07:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.365 19:07:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:27.365 19:07:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.365 19:07:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:27.365 19:07:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.365 19:07:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:27.365 19:07:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:27.365 19:07:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:27.365 19:07:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.624 2024/02/14 19:07:04 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:27.624 request: 00:06:27.624 { 00:06:27.624 "method": "env_dpdk_get_mem_stats", 00:06:27.624 "params": {} 00:06:27.624 } 00:06:27.624 Got JSON-RPC error response 00:06:27.624 GoRPCClient: error on JSON-RPC call 00:06:27.624 19:07:04 -- common/autotest_common.sh@641 -- # es=1 00:06:27.624 19:07:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:27.624 19:07:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:27.624 19:07:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:27.624 19:07:04 -- app/cmdline.sh@1 -- # killprocess 59974 00:06:27.624 19:07:04 -- common/autotest_common.sh@924 -- # '[' -z 59974 ']' 00:06:27.624 19:07:04 -- common/autotest_common.sh@928 -- # kill -0 59974 00:06:27.624 19:07:04 -- common/autotest_common.sh@929 -- # uname 00:06:27.624 19:07:04 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:27.624 19:07:04 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 59974 00:06:27.624 killing process with pid 59974 00:06:27.624 19:07:04 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:27.624 19:07:04 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:27.624 19:07:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 59974' 00:06:27.624 19:07:04 -- common/autotest_common.sh@943 -- # kill 59974 00:06:27.624 19:07:04 -- common/autotest_common.sh@948 -- # wait 59974 00:06:28.192 00:06:28.192 real 0m2.334s 00:06:28.192 user 0m2.752s 00:06:28.192 sys 0m0.594s 00:06:28.192 19:07:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.192 ************************************ 00:06:28.192 19:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.192 END TEST app_cmdline 00:06:28.192 ************************************ 00:06:28.192 19:07:05 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.192 19:07:05 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:28.192 19:07:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:28.192 19:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.192 ************************************ 00:06:28.192 START TEST version 00:06:28.192 ************************************ 00:06:28.192 19:07:05 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.192 * Looking for test storage... 
00:06:28.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:28.451 19:07:05 -- app/version.sh@17 -- # get_header_version major 00:06:28.451 19:07:05 -- app/version.sh@14 -- # cut -f2 00:06:28.451 19:07:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.451 19:07:05 -- app/version.sh@14 -- # tr -d '"' 00:06:28.451 19:07:05 -- app/version.sh@17 -- # major=24 00:06:28.451 19:07:05 -- app/version.sh@18 -- # get_header_version minor 00:06:28.451 19:07:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.451 19:07:05 -- app/version.sh@14 -- # tr -d '"' 00:06:28.451 19:07:05 -- app/version.sh@14 -- # cut -f2 00:06:28.451 19:07:05 -- app/version.sh@18 -- # minor=5 00:06:28.451 19:07:05 -- app/version.sh@19 -- # get_header_version patch 00:06:28.451 19:07:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.451 19:07:05 -- app/version.sh@14 -- # cut -f2 00:06:28.451 19:07:05 -- app/version.sh@14 -- # tr -d '"' 00:06:28.451 19:07:05 -- app/version.sh@19 -- # patch=0 00:06:28.451 19:07:05 -- app/version.sh@20 -- # get_header_version suffix 00:06:28.451 19:07:05 -- app/version.sh@14 -- # cut -f2 00:06:28.451 19:07:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.451 19:07:05 -- app/version.sh@14 -- # tr -d '"' 00:06:28.451 19:07:05 -- app/version.sh@20 -- # suffix=-pre 00:06:28.451 19:07:05 -- app/version.sh@22 -- # version=24.5 00:06:28.451 19:07:05 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:28.451 19:07:05 -- app/version.sh@28 -- # version=24.5rc0 00:06:28.451 19:07:05 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:28.451 19:07:05 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.451 19:07:05 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:28.451 19:07:05 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:28.451 00:06:28.451 real 0m0.151s 00:06:28.451 user 0m0.074s 00:06:28.451 sys 0m0.108s 00:06:28.451 19:07:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.451 19:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.451 ************************************ 00:06:28.451 END TEST version 00:06:28.451 ************************************ 00:06:28.451 19:07:05 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:28.451 19:07:05 -- spdk/autotest.sh@204 -- # uname -s 00:06:28.451 19:07:05 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:28.451 19:07:05 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:28.451 19:07:05 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:28.451 19:07:05 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:28.451 19:07:05 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:28.451 19:07:05 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:28.451 19:07:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:28.451 19:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.451 19:07:05 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:28.451 19:07:05 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:28.451 19:07:05 -- 
spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:28.451 19:07:05 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:28.451 19:07:05 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:28.451 19:07:05 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:28.451 19:07:05 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:28.451 19:07:05 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:28.451 19:07:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:28.451 19:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.451 ************************************ 00:06:28.451 START TEST nvmf_tcp 00:06:28.451 ************************************ 00:06:28.451 19:07:05 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:28.451 * Looking for test storage... 00:06:28.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:28.451 19:07:05 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:28.451 19:07:05 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:28.451 19:07:05 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:28.451 19:07:05 -- nvmf/common.sh@7 -- # uname -s 00:06:28.451 19:07:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.451 19:07:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.451 19:07:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.451 19:07:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.451 19:07:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.451 19:07:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.452 19:07:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.452 19:07:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.452 19:07:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.452 19:07:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.711 19:07:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:06:28.711 19:07:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:06:28.711 19:07:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.711 19:07:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.711 19:07:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:28.711 19:07:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.711 19:07:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.711 19:07:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.711 19:07:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.711 19:07:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.711 19:07:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.711 19:07:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.711 19:07:05 -- paths/export.sh@5 -- # export PATH 00:06:28.711 19:07:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.711 19:07:05 -- nvmf/common.sh@46 -- # : 0 00:06:28.711 19:07:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:28.711 19:07:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:28.711 19:07:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:28.711 19:07:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.711 19:07:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.711 19:07:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:28.711 19:07:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:28.711 19:07:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:28.711 19:07:05 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:28.711 19:07:05 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:28.711 19:07:05 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:28.711 19:07:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:28.711 19:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.711 19:07:05 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:28.711 19:07:05 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:28.711 19:07:05 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:28.711 19:07:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:28.711 19:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.711 ************************************ 00:06:28.711 START TEST nvmf_example 00:06:28.711 ************************************ 00:06:28.711 19:07:05 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:28.711 * Looking for test storage... 
00:06:28.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:28.711 19:07:05 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:28.711 19:07:05 -- nvmf/common.sh@7 -- # uname -s 00:06:28.711 19:07:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.711 19:07:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.711 19:07:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.711 19:07:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.711 19:07:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.711 19:07:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.711 19:07:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.711 19:07:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.711 19:07:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.711 19:07:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.711 19:07:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:06:28.711 19:07:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:06:28.711 19:07:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.711 19:07:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.711 19:07:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:28.711 19:07:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.712 19:07:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.712 19:07:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.712 19:07:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.712 19:07:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.712 19:07:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.712 19:07:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.712 19:07:05 -- 
paths/export.sh@5 -- # export PATH 00:06:28.712 19:07:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.712 19:07:05 -- nvmf/common.sh@46 -- # : 0 00:06:28.712 19:07:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:28.712 19:07:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:28.712 19:07:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:28.712 19:07:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.712 19:07:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.712 19:07:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:28.712 19:07:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:28.712 19:07:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:28.712 19:07:05 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:28.712 19:07:05 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:28.712 19:07:05 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:28.712 19:07:05 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:28.712 19:07:05 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:28.712 19:07:05 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:28.712 19:07:05 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:28.712 19:07:05 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:28.712 19:07:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:28.712 19:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.712 19:07:05 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:28.712 19:07:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:28.712 19:07:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.712 19:07:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:28.712 19:07:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:28.712 19:07:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:28.712 19:07:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.712 19:07:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:28.712 19:07:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.712 19:07:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:28.712 19:07:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:28.712 19:07:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:28.712 19:07:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:28.712 19:07:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:28.712 19:07:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:28.712 19:07:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.712 19:07:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.712 19:07:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:28.712 19:07:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:28.712 19:07:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:28.712 19:07:06 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:28.712 19:07:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:28.712 19:07:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.712 19:07:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:28.712 19:07:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:28.712 19:07:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:28.712 19:07:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:28.712 19:07:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:28.712 Cannot find device "nvmf_init_br" 00:06:28.712 19:07:06 -- nvmf/common.sh@153 -- # true 00:06:28.712 19:07:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:28.712 Cannot find device "nvmf_tgt_br" 00:06:28.712 19:07:06 -- nvmf/common.sh@154 -- # true 00:06:28.712 19:07:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:28.712 Cannot find device "nvmf_tgt_br2" 00:06:28.712 19:07:06 -- nvmf/common.sh@155 -- # true 00:06:28.712 19:07:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:28.712 Cannot find device "nvmf_init_br" 00:06:28.712 19:07:06 -- nvmf/common.sh@156 -- # true 00:06:28.712 19:07:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:28.712 Cannot find device "nvmf_tgt_br" 00:06:28.712 19:07:06 -- nvmf/common.sh@157 -- # true 00:06:28.712 19:07:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:28.712 Cannot find device "nvmf_tgt_br2" 00:06:28.712 19:07:06 -- nvmf/common.sh@158 -- # true 00:06:28.712 19:07:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:28.712 Cannot find device "nvmf_br" 00:06:28.712 19:07:06 -- nvmf/common.sh@159 -- # true 00:06:28.712 19:07:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:28.712 Cannot find device "nvmf_init_if" 00:06:28.712 19:07:06 -- nvmf/common.sh@160 -- # true 00:06:28.712 19:07:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:28.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:28.712 19:07:06 -- nvmf/common.sh@161 -- # true 00:06:28.712 19:07:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:28.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:28.712 19:07:06 -- nvmf/common.sh@162 -- # true 00:06:28.712 19:07:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:28.712 19:07:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:28.971 19:07:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:28.971 19:07:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:28.971 19:07:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:28.971 19:07:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:28.971 19:07:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:28.972 19:07:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:28.972 19:07:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:28.972 19:07:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:28.972 
19:07:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:28.972 19:07:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:28.972 19:07:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:28.972 19:07:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:28.972 19:07:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:28.972 19:07:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:28.972 19:07:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:28.972 19:07:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:28.972 19:07:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:28.972 19:07:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:28.972 19:07:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:28.972 19:07:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:28.972 19:07:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:28.972 19:07:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:28.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:28.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:06:28.972 00:06:28.972 --- 10.0.0.2 ping statistics --- 00:06:28.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.972 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:06:28.972 19:07:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:28.972 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:28.972 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.152 ms 00:06:28.972 00:06:28.972 --- 10.0.0.3 ping statistics --- 00:06:28.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.972 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:06:28.972 19:07:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:28.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:28.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:06:28.972 00:06:28.972 --- 10.0.0.1 ping statistics --- 00:06:28.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.972 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:06:28.972 19:07:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.972 19:07:06 -- nvmf/common.sh@421 -- # return 0 00:06:28.972 19:07:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:28.972 19:07:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.972 19:07:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:28.972 19:07:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:28.972 19:07:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.972 19:07:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:28.972 19:07:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:29.231 19:07:06 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:29.231 19:07:06 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:29.231 19:07:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:29.231 19:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.231 19:07:06 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:29.231 19:07:06 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:29.231 19:07:06 -- target/nvmf_example.sh@34 -- # nvmfpid=60328 00:06:29.231 19:07:06 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:29.231 19:07:06 -- target/nvmf_example.sh@36 -- # waitforlisten 60328 00:06:29.231 19:07:06 -- common/autotest_common.sh@817 -- # '[' -z 60328 ']' 00:06:29.231 19:07:06 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:29.231 19:07:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.231 19:07:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:29.231 19:07:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:29.231 19:07:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:29.231 19:07:06 -- common/autotest_common.sh@10 -- # set +x 00:06:30.200 19:07:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:30.201 19:07:07 -- common/autotest_common.sh@850 -- # return 0 00:06:30.201 19:07:07 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:30.201 19:07:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:30.201 19:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.201 19:07:07 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:30.201 19:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.201 19:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.201 19:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.201 19:07:07 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:30.201 19:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.201 19:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.201 19:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.201 19:07:07 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:30.201 19:07:07 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.201 19:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.201 19:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.201 19:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.201 19:07:07 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:30.201 19:07:07 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:30.201 19:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.201 19:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.201 19:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.201 19:07:07 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.201 19:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.201 19:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.201 19:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.201 19:07:07 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:30.201 19:07:07 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:42.408 Initializing NVMe Controllers 00:06:42.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:42.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:42.408 Initialization complete. Launching workers. 
00:06:42.408 ======================================================== 00:06:42.408 Latency(us) 00:06:42.408 Device Information : IOPS MiB/s Average min max 00:06:42.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14764.44 57.67 4334.32 822.00 24891.86 00:06:42.408 ======================================================== 00:06:42.408 Total : 14764.44 57.67 4334.32 822.00 24891.86 00:06:42.408 00:06:42.408 19:07:17 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:42.408 19:07:17 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:42.408 19:07:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:42.408 19:07:17 -- nvmf/common.sh@116 -- # sync 00:06:42.408 19:07:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:42.408 19:07:17 -- nvmf/common.sh@119 -- # set +e 00:06:42.408 19:07:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:42.408 19:07:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:42.408 rmmod nvme_tcp 00:06:42.408 rmmod nvme_fabrics 00:06:42.408 rmmod nvme_keyring 00:06:42.408 19:07:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:42.408 19:07:17 -- nvmf/common.sh@123 -- # set -e 00:06:42.408 19:07:17 -- nvmf/common.sh@124 -- # return 0 00:06:42.408 19:07:17 -- nvmf/common.sh@477 -- # '[' -n 60328 ']' 00:06:42.408 19:07:17 -- nvmf/common.sh@478 -- # killprocess 60328 00:06:42.408 19:07:17 -- common/autotest_common.sh@924 -- # '[' -z 60328 ']' 00:06:42.408 19:07:17 -- common/autotest_common.sh@928 -- # kill -0 60328 00:06:42.408 19:07:17 -- common/autotest_common.sh@929 -- # uname 00:06:42.408 19:07:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:42.408 19:07:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 60328 00:06:42.408 19:07:17 -- common/autotest_common.sh@930 -- # process_name=nvmf 00:06:42.408 19:07:17 -- common/autotest_common.sh@934 -- # '[' nvmf = sudo ']' 00:06:42.408 killing process with pid 60328 00:06:42.408 19:07:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 60328' 00:06:42.408 19:07:17 -- common/autotest_common.sh@943 -- # kill 60328 00:06:42.408 19:07:17 -- common/autotest_common.sh@948 -- # wait 60328 00:06:42.408 nvmf threads initialize successfully 00:06:42.408 bdev subsystem init successfully 00:06:42.408 created a nvmf target service 00:06:42.408 create targets's poll groups done 00:06:42.408 all subsystems of target started 00:06:42.408 nvmf target is running 00:06:42.408 all subsystems of target stopped 00:06:42.408 destroy targets's poll groups done 00:06:42.408 destroyed the nvmf target service 00:06:42.408 bdev subsystem finish successfully 00:06:42.408 nvmf threads destroy successfully 00:06:42.408 19:07:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:42.408 19:07:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:42.408 19:07:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:42.408 19:07:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:42.408 19:07:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:42.408 19:07:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.408 19:07:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.408 19:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.408 19:07:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:06:42.408 19:07:18 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:42.408 19:07:18 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:06:42.408 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:42.408 00:06:42.408 real 0m12.435s 00:06:42.408 user 0m44.511s 00:06:42.408 sys 0m2.064s 00:06:42.408 19:07:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.408 ************************************ 00:06:42.408 END TEST nvmf_example 00:06:42.408 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:42.408 ************************************ 00:06:42.408 19:07:18 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:42.408 19:07:18 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:42.408 19:07:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:42.408 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:42.408 ************************************ 00:06:42.408 START TEST nvmf_filesystem 00:06:42.408 ************************************ 00:06:42.408 19:07:18 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:42.408 * Looking for test storage... 00:06:42.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:42.408 19:07:18 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:42.408 19:07:18 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:42.408 19:07:18 -- common/autotest_common.sh@34 -- # set -e 00:06:42.408 19:07:18 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:42.408 19:07:18 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:42.408 19:07:18 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:42.408 19:07:18 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:42.408 19:07:18 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:42.408 19:07:18 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:42.408 19:07:18 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:42.409 19:07:18 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:42.409 19:07:18 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:42.409 19:07:18 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:42.409 19:07:18 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:42.409 19:07:18 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:42.409 19:07:18 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:42.409 19:07:18 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:42.409 19:07:18 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:42.409 19:07:18 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:42.409 19:07:18 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:42.409 19:07:18 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:42.409 19:07:18 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:42.409 19:07:18 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:42.409 19:07:18 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:42.409 19:07:18 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:42.409 19:07:18 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:42.409 19:07:18 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:42.409 19:07:18 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:42.409 19:07:18 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:06:42.409 19:07:18 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:42.409 19:07:18 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:42.409 19:07:18 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:42.409 19:07:18 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:42.409 19:07:18 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:42.409 19:07:18 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:42.409 19:07:18 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:42.409 19:07:18 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:42.409 19:07:18 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:42.409 19:07:18 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:42.409 19:07:18 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:42.409 19:07:18 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:42.409 19:07:18 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:42.409 19:07:18 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:42.409 19:07:18 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:42.409 19:07:18 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:42.409 19:07:18 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:42.409 19:07:18 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:42.409 19:07:18 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:42.409 19:07:18 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:42.409 19:07:18 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:42.409 19:07:18 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:42.409 19:07:18 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:42.409 19:07:18 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:42.409 19:07:18 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:42.409 19:07:18 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:42.409 19:07:18 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:42.409 19:07:18 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:42.409 19:07:18 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:42.409 19:07:18 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:42.409 19:07:18 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:42.409 19:07:18 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:42.409 19:07:18 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:42.409 19:07:18 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:42.409 19:07:18 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:42.409 19:07:18 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:06:42.409 19:07:18 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:42.409 19:07:18 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:06:42.409 19:07:18 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:42.409 19:07:18 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:42.409 19:07:18 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:42.409 19:07:18 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:42.409 19:07:18 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:42.409 19:07:18 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:42.409 19:07:18 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:42.409 19:07:18 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:06:42.409 19:07:18 -- common/build_config.sh@69 -- # 
CONFIG_FIO_PLUGIN=y 00:06:42.409 19:07:18 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:42.409 19:07:18 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:42.409 19:07:18 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:42.409 19:07:18 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:42.409 19:07:18 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:42.409 19:07:18 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:42.409 19:07:18 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:42.409 19:07:18 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:42.409 19:07:18 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:42.409 19:07:18 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:42.409 19:07:18 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:42.409 19:07:18 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:42.409 19:07:18 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:42.409 19:07:18 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:42.409 19:07:18 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:42.409 19:07:18 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:42.409 19:07:18 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:42.409 19:07:18 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:42.409 19:07:18 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:42.409 19:07:18 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:42.409 19:07:18 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:42.409 19:07:18 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:42.409 19:07:18 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:42.409 19:07:18 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:42.409 19:07:18 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:42.409 19:07:18 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:42.409 #define SPDK_CONFIG_H 00:06:42.409 #define SPDK_CONFIG_APPS 1 00:06:42.409 #define SPDK_CONFIG_ARCH native 00:06:42.409 #undef SPDK_CONFIG_ASAN 00:06:42.409 #define SPDK_CONFIG_AVAHI 1 00:06:42.409 #undef SPDK_CONFIG_CET 00:06:42.409 #define SPDK_CONFIG_COVERAGE 1 00:06:42.409 #define SPDK_CONFIG_CROSS_PREFIX 00:06:42.409 #undef SPDK_CONFIG_CRYPTO 00:06:42.409 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:42.409 #undef SPDK_CONFIG_CUSTOMOCF 00:06:42.409 #undef SPDK_CONFIG_DAOS 00:06:42.409 #define SPDK_CONFIG_DAOS_DIR 00:06:42.409 #define SPDK_CONFIG_DEBUG 1 00:06:42.409 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:42.409 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:42.409 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:42.409 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:42.409 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:42.409 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:42.409 #define SPDK_CONFIG_EXAMPLES 1 00:06:42.409 #undef SPDK_CONFIG_FC 00:06:42.409 #define SPDK_CONFIG_FC_PATH 00:06:42.409 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:42.409 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:42.409 #undef 
SPDK_CONFIG_FUSE 00:06:42.409 #undef SPDK_CONFIG_FUZZER 00:06:42.409 #define SPDK_CONFIG_FUZZER_LIB 00:06:42.409 #define SPDK_CONFIG_GOLANG 1 00:06:42.409 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:42.409 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:42.409 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:42.409 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:42.409 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:42.409 #define SPDK_CONFIG_IDXD 1 00:06:42.409 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:42.409 #undef SPDK_CONFIG_IPSEC_MB 00:06:42.409 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:42.409 #define SPDK_CONFIG_ISAL 1 00:06:42.409 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:42.409 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:42.409 #define SPDK_CONFIG_LIBDIR 00:06:42.409 #undef SPDK_CONFIG_LTO 00:06:42.409 #define SPDK_CONFIG_MAX_LCORES 00:06:42.409 #define SPDK_CONFIG_NVME_CUSE 1 00:06:42.409 #undef SPDK_CONFIG_OCF 00:06:42.409 #define SPDK_CONFIG_OCF_PATH 00:06:42.409 #define SPDK_CONFIG_OPENSSL_PATH 00:06:42.409 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:42.409 #undef SPDK_CONFIG_PGO_USE 00:06:42.409 #define SPDK_CONFIG_PREFIX /usr/local 00:06:42.409 #undef SPDK_CONFIG_RAID5F 00:06:42.409 #undef SPDK_CONFIG_RBD 00:06:42.409 #define SPDK_CONFIG_RDMA 1 00:06:42.409 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:42.409 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:42.409 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:42.409 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:42.409 #define SPDK_CONFIG_SHARED 1 00:06:42.409 #undef SPDK_CONFIG_SMA 00:06:42.409 #define SPDK_CONFIG_TESTS 1 00:06:42.409 #undef SPDK_CONFIG_TSAN 00:06:42.409 #define SPDK_CONFIG_UBLK 1 00:06:42.409 #define SPDK_CONFIG_UBSAN 1 00:06:42.409 #undef SPDK_CONFIG_UNIT_TESTS 00:06:42.409 #undef SPDK_CONFIG_URING 00:06:42.409 #define SPDK_CONFIG_URING_PATH 00:06:42.409 #undef SPDK_CONFIG_URING_ZNS 00:06:42.409 #define SPDK_CONFIG_USDT 1 00:06:42.409 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:42.409 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:42.409 #define SPDK_CONFIG_VFIO_USER 1 00:06:42.409 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:42.409 #define SPDK_CONFIG_VHOST 1 00:06:42.409 #define SPDK_CONFIG_VIRTIO 1 00:06:42.409 #undef SPDK_CONFIG_VTUNE 00:06:42.409 #define SPDK_CONFIG_VTUNE_DIR 00:06:42.409 #define SPDK_CONFIG_WERROR 1 00:06:42.409 #define SPDK_CONFIG_WPDK_DIR 00:06:42.409 #undef SPDK_CONFIG_XNVME 00:06:42.410 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:42.410 19:07:18 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:42.410 19:07:18 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.410 19:07:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.410 19:07:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.410 19:07:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.410 19:07:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.410 19:07:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.410 19:07:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.410 19:07:18 -- paths/export.sh@5 -- # export PATH 00:06:42.410 19:07:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.410 19:07:18 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:42.410 19:07:18 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:42.410 19:07:18 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:42.410 19:07:18 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:42.410 19:07:18 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:42.410 19:07:18 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:42.410 19:07:18 -- pm/common@16 -- # TEST_TAG=N/A 00:06:42.410 19:07:18 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:42.410 19:07:18 -- common/autotest_common.sh@52 -- # : 1 00:06:42.410 19:07:18 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:42.410 19:07:18 -- common/autotest_common.sh@56 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:42.410 19:07:18 -- common/autotest_common.sh@58 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:42.410 19:07:18 -- common/autotest_common.sh@60 -- # : 1 00:06:42.410 19:07:18 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:42.410 19:07:18 -- common/autotest_common.sh@62 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:42.410 19:07:18 -- common/autotest_common.sh@64 -- # : 00:06:42.410 19:07:18 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:42.410 19:07:18 -- common/autotest_common.sh@66 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@67 -- # export 
SPDK_TEST_RELEASE_BUILD 00:06:42.410 19:07:18 -- common/autotest_common.sh@68 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:42.410 19:07:18 -- common/autotest_common.sh@70 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:42.410 19:07:18 -- common/autotest_common.sh@72 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:42.410 19:07:18 -- common/autotest_common.sh@74 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:42.410 19:07:18 -- common/autotest_common.sh@76 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:42.410 19:07:18 -- common/autotest_common.sh@78 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:42.410 19:07:18 -- common/autotest_common.sh@80 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:42.410 19:07:18 -- common/autotest_common.sh@82 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:42.410 19:07:18 -- common/autotest_common.sh@84 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:42.410 19:07:18 -- common/autotest_common.sh@86 -- # : 1 00:06:42.410 19:07:18 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:42.410 19:07:18 -- common/autotest_common.sh@88 -- # : 1 00:06:42.410 19:07:18 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:42.410 19:07:18 -- common/autotest_common.sh@90 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:42.410 19:07:18 -- common/autotest_common.sh@92 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:42.410 19:07:18 -- common/autotest_common.sh@94 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:42.410 19:07:18 -- common/autotest_common.sh@96 -- # : tcp 00:06:42.410 19:07:18 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:42.410 19:07:18 -- common/autotest_common.sh@98 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:42.410 19:07:18 -- common/autotest_common.sh@100 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:42.410 19:07:18 -- common/autotest_common.sh@102 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:42.410 19:07:18 -- common/autotest_common.sh@104 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:42.410 19:07:18 -- common/autotest_common.sh@106 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:42.410 19:07:18 -- common/autotest_common.sh@108 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:42.410 19:07:18 -- common/autotest_common.sh@110 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:42.410 19:07:18 -- common/autotest_common.sh@112 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:42.410 19:07:18 -- common/autotest_common.sh@114 -- # : 0 00:06:42.410 19:07:18 -- 
common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:42.410 19:07:18 -- common/autotest_common.sh@116 -- # : 1 00:06:42.410 19:07:18 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:42.410 19:07:18 -- common/autotest_common.sh@118 -- # : 00:06:42.410 19:07:18 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:42.410 19:07:18 -- common/autotest_common.sh@120 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:42.410 19:07:18 -- common/autotest_common.sh@122 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:42.410 19:07:18 -- common/autotest_common.sh@124 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:42.410 19:07:18 -- common/autotest_common.sh@126 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:42.410 19:07:18 -- common/autotest_common.sh@128 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:42.410 19:07:18 -- common/autotest_common.sh@130 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:42.410 19:07:18 -- common/autotest_common.sh@132 -- # : 00:06:42.410 19:07:18 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:42.410 19:07:18 -- common/autotest_common.sh@134 -- # : true 00:06:42.410 19:07:18 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:42.410 19:07:18 -- common/autotest_common.sh@136 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:42.410 19:07:18 -- common/autotest_common.sh@138 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:42.410 19:07:18 -- common/autotest_common.sh@140 -- # : 1 00:06:42.410 19:07:18 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:42.410 19:07:18 -- common/autotest_common.sh@142 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:42.410 19:07:18 -- common/autotest_common.sh@144 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:42.410 19:07:18 -- common/autotest_common.sh@146 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:42.410 19:07:18 -- common/autotest_common.sh@148 -- # : 00:06:42.410 19:07:18 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:42.410 19:07:18 -- common/autotest_common.sh@150 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:42.410 19:07:18 -- common/autotest_common.sh@152 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:06:42.410 19:07:18 -- common/autotest_common.sh@154 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:42.410 19:07:18 -- common/autotest_common.sh@156 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:42.410 19:07:18 -- common/autotest_common.sh@158 -- # : 0 00:06:42.410 19:07:18 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:42.410 19:07:18 -- common/autotest_common.sh@161 -- # : 00:06:42.410 19:07:18 -- common/autotest_common.sh@162 -- # export SPDK_TEST_FUZZER_TARGET 00:06:42.410 19:07:18 -- common/autotest_common.sh@163 -- # : 1 00:06:42.410 
19:07:18 -- common/autotest_common.sh@164 -- # export SPDK_TEST_NVMF_MDNS 00:06:42.410 19:07:18 -- common/autotest_common.sh@165 -- # : 1 00:06:42.410 19:07:18 -- common/autotest_common.sh@166 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:42.410 19:07:18 -- common/autotest_common.sh@169 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:42.411 19:07:18 -- common/autotest_common.sh@169 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:42.411 19:07:18 -- common/autotest_common.sh@170 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:42.411 19:07:18 -- common/autotest_common.sh@170 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:42.411 19:07:18 -- common/autotest_common.sh@171 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:42.411 19:07:18 -- common/autotest_common.sh@171 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:42.411 19:07:18 -- common/autotest_common.sh@172 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:42.411 19:07:18 -- common/autotest_common.sh@172 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:42.411 19:07:18 -- common/autotest_common.sh@175 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:42.411 19:07:18 -- common/autotest_common.sh@175 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:42.411 19:07:18 -- common/autotest_common.sh@179 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:42.411 19:07:18 -- common/autotest_common.sh@179 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:42.411 19:07:18 -- common/autotest_common.sh@183 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:42.411 19:07:18 -- common/autotest_common.sh@183 -- # PYTHONDONTWRITEBYTECODE=1 00:06:42.411 19:07:18 -- common/autotest_common.sh@187 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:42.411 
19:07:18 -- common/autotest_common.sh@187 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:42.411 19:07:18 -- common/autotest_common.sh@188 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:42.411 19:07:18 -- common/autotest_common.sh@188 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:42.411 19:07:18 -- common/autotest_common.sh@192 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:42.411 19:07:18 -- common/autotest_common.sh@193 -- # rm -rf /var/tmp/asan_suppression_file 00:06:42.411 19:07:18 -- common/autotest_common.sh@194 -- # cat 00:06:42.411 19:07:18 -- common/autotest_common.sh@220 -- # echo leak:libfuse3.so 00:06:42.411 19:07:18 -- common/autotest_common.sh@222 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:42.411 19:07:18 -- common/autotest_common.sh@222 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:42.411 19:07:18 -- common/autotest_common.sh@224 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:42.411 19:07:18 -- common/autotest_common.sh@224 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:42.411 19:07:18 -- common/autotest_common.sh@226 -- # '[' -z /var/spdk/dependencies ']' 00:06:42.411 19:07:18 -- common/autotest_common.sh@229 -- # export DEPENDENCY_DIR 00:06:42.411 19:07:18 -- common/autotest_common.sh@233 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:42.411 19:07:18 -- common/autotest_common.sh@233 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:42.411 19:07:18 -- common/autotest_common.sh@234 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:42.411 19:07:18 -- common/autotest_common.sh@234 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:42.411 19:07:18 -- common/autotest_common.sh@237 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:42.411 19:07:18 -- common/autotest_common.sh@237 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:42.411 19:07:18 -- common/autotest_common.sh@238 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:42.411 19:07:18 -- common/autotest_common.sh@238 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:42.411 19:07:18 -- common/autotest_common.sh@240 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:42.411 19:07:18 -- common/autotest_common.sh@240 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:42.411 19:07:18 -- common/autotest_common.sh@243 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:42.411 19:07:18 -- common/autotest_common.sh@243 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:42.411 19:07:18 -- common/autotest_common.sh@246 -- # '[' 0 -eq 0 ']' 00:06:42.411 19:07:18 -- common/autotest_common.sh@247 -- # export valgrind= 00:06:42.411 19:07:18 -- common/autotest_common.sh@247 -- # valgrind= 00:06:42.411 19:07:18 -- common/autotest_common.sh@253 -- # uname -s 00:06:42.411 19:07:18 -- common/autotest_common.sh@253 -- # '[' Linux = Linux ']' 00:06:42.411 19:07:18 -- common/autotest_common.sh@254 -- # HUGEMEM=4096 00:06:42.411 19:07:18 -- common/autotest_common.sh@255 -- # export CLEAR_HUGE=yes 00:06:42.411 19:07:18 -- common/autotest_common.sh@255 -- # CLEAR_HUGE=yes 00:06:42.411 19:07:18 -- common/autotest_common.sh@256 -- # [[ 0 -eq 1 ]] 
00:06:42.411 19:07:18 -- common/autotest_common.sh@256 -- # [[ 0 -eq 1 ]] 00:06:42.411 19:07:18 -- common/autotest_common.sh@263 -- # MAKE=make 00:06:42.411 19:07:18 -- common/autotest_common.sh@264 -- # MAKEFLAGS=-j10 00:06:42.411 19:07:18 -- common/autotest_common.sh@280 -- # export HUGEMEM=4096 00:06:42.411 19:07:18 -- common/autotest_common.sh@280 -- # HUGEMEM=4096 00:06:42.411 19:07:18 -- common/autotest_common.sh@282 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:42.411 19:07:18 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:42.411 19:07:18 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:42.411 19:07:18 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:42.411 19:07:18 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:42.411 19:07:18 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:06:42.411 19:07:18 -- common/autotest_common.sh@307 -- # [[ -z 60571 ]] 00:06:42.411 19:07:18 -- common/autotest_common.sh@307 -- # kill -0 60571 00:06:42.411 19:07:18 -- common/autotest_common.sh@1663 -- # set_test_storage 2147483648 00:06:42.411 19:07:18 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:42.411 19:07:18 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:42.411 19:07:18 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:42.411 19:07:18 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:42.411 19:07:18 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:42.411 19:07:18 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:42.411 19:07:18 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:42.411 19:07:18 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.2EN5ZW 00:06:42.411 19:07:18 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:42.411 19:07:18 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:42.411 19:07:18 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:42.411 19:07:18 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.2EN5ZW/tests/target /tmp/spdk.2EN5ZW 00:06:42.411 19:07:18 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:42.411 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.411 19:07:18 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:42.411 19:07:18 -- common/autotest_common.sh@316 -- # df -T 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:06:42.411 19:07:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:42.411 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=6266634240 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:06:42.411 19:07:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=1257472 
00:06:42.411 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:06:42.411 19:07:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:06:42.411 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=13817528320 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:06:42.411 19:07:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=5206736896 00:06:42.411 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=13817528320 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:06:42.411 19:07:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=5206736896 00:06:42.411 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:06:42.411 19:07:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:06:42.411 19:07:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:06:42.411 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:06:42.411 19:07:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:06:42.412 19:07:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:06:42.412 19:07:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:06:42.412 19:07:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:06:42.412 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.412 19:07:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:42.412 19:07:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:42.412 19:07:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267756544 00:06:42.412 19:07:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:06:42.412 19:07:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=135168 00:06:42.412 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.412 19:07:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:42.412 19:07:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:42.412 19:07:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:06:42.412 19:07:18 -- 
common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:06:42.412 19:07:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:42.412 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.412 19:07:18 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:06:42.412 19:07:18 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:06:42.412 19:07:18 -- common/autotest_common.sh@351 -- # avails["$mount"]=97419321344 00:06:42.412 19:07:18 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:06:42.412 19:07:18 -- common/autotest_common.sh@352 -- # uses["$mount"]=2283458560 00:06:42.412 19:07:18 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:42.412 19:07:18 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:42.412 * Looking for test storage... 00:06:42.412 19:07:18 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:42.412 19:07:18 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:42.412 19:07:18 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:42.412 19:07:18 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:42.412 19:07:18 -- common/autotest_common.sh@361 -- # mount=/home 00:06:42.412 19:07:18 -- common/autotest_common.sh@363 -- # target_space=13817528320 00:06:42.412 19:07:18 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:42.412 19:07:18 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:42.412 19:07:18 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:06:42.412 19:07:18 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:06:42.412 19:07:18 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:06:42.412 19:07:18 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:42.412 19:07:18 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:42.412 19:07:18 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:42.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:42.412 19:07:18 -- common/autotest_common.sh@378 -- # return 0 00:06:42.412 19:07:18 -- common/autotest_common.sh@1665 -- # set -o errtrace 00:06:42.412 19:07:18 -- common/autotest_common.sh@1666 -- # shopt -s extdebug 00:06:42.412 19:07:18 -- common/autotest_common.sh@1667 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:42.412 19:07:18 -- common/autotest_common.sh@1669 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:42.412 19:07:18 -- common/autotest_common.sh@1670 -- # true 00:06:42.412 19:07:18 -- common/autotest_common.sh@1672 -- # xtrace_fd 00:06:42.412 19:07:18 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:42.412 19:07:18 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:42.412 19:07:18 -- common/autotest_common.sh@27 -- # exec 00:06:42.412 19:07:18 -- common/autotest_common.sh@29 -- # exec 00:06:42.412 19:07:18 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:42.412 19:07:18 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:42.412 19:07:18 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:42.412 19:07:18 -- common/autotest_common.sh@18 -- # set -x 00:06:42.412 19:07:18 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:42.412 19:07:18 -- nvmf/common.sh@7 -- # uname -s 00:06:42.412 19:07:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.412 19:07:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.412 19:07:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.412 19:07:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.412 19:07:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.412 19:07:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.412 19:07:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.412 19:07:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.412 19:07:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.412 19:07:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.412 19:07:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:06:42.412 19:07:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:06:42.412 19:07:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.412 19:07:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.412 19:07:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:42.412 19:07:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.412 19:07:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.412 19:07:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.412 19:07:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.412 19:07:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.412 19:07:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.412 19:07:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.412 19:07:18 -- paths/export.sh@5 -- # export PATH 00:06:42.412 19:07:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.412 19:07:18 -- nvmf/common.sh@46 -- # : 0 00:06:42.412 19:07:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:42.412 19:07:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:42.412 19:07:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:42.412 19:07:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.412 19:07:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.412 19:07:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:42.412 19:07:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:42.412 19:07:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:42.412 19:07:18 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:42.412 19:07:18 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:42.412 19:07:18 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:42.412 19:07:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:42.412 19:07:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.412 19:07:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:42.412 19:07:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:42.412 19:07:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:42.412 19:07:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.412 19:07:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.412 19:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.412 19:07:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:42.412 19:07:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:42.412 19:07:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:42.412 19:07:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:42.412 19:07:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:42.412 19:07:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:42.412 19:07:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.412 19:07:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.412 19:07:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:42.412 19:07:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:42.412 19:07:18 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:42.412 19:07:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:42.412 19:07:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:42.412 19:07:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.412 19:07:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:42.412 19:07:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:42.412 19:07:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:42.412 19:07:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:42.412 19:07:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:42.412 19:07:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:42.412 Cannot find device "nvmf_tgt_br" 00:06:42.412 19:07:18 -- nvmf/common.sh@154 -- # true 00:06:42.412 19:07:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:42.412 Cannot find device "nvmf_tgt_br2" 00:06:42.412 19:07:18 -- nvmf/common.sh@155 -- # true 00:06:42.413 19:07:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:42.413 19:07:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:42.413 Cannot find device "nvmf_tgt_br" 00:06:42.413 19:07:18 -- nvmf/common.sh@157 -- # true 00:06:42.413 19:07:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:42.413 Cannot find device "nvmf_tgt_br2" 00:06:42.413 19:07:18 -- nvmf/common.sh@158 -- # true 00:06:42.413 19:07:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:42.413 19:07:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:42.413 19:07:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:42.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:42.413 19:07:18 -- nvmf/common.sh@161 -- # true 00:06:42.413 19:07:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:42.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:42.413 19:07:18 -- nvmf/common.sh@162 -- # true 00:06:42.413 19:07:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:42.413 19:07:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:42.413 19:07:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:42.413 19:07:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:42.413 19:07:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:42.413 19:07:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:42.413 19:07:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:42.413 19:07:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:42.413 19:07:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:42.413 19:07:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:42.413 19:07:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:42.413 19:07:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:42.413 19:07:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:42.413 19:07:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:42.413 19:07:18 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:42.413 19:07:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:42.413 19:07:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:42.413 19:07:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:42.413 19:07:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:42.413 19:07:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:42.413 19:07:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:42.413 19:07:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:42.413 19:07:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:42.413 19:07:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:42.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:06:42.413 00:06:42.413 --- 10.0.0.2 ping statistics --- 00:06:42.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.413 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:06:42.413 19:07:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:42.413 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:42.413 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:06:42.413 00:06:42.413 --- 10.0.0.3 ping statistics --- 00:06:42.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.413 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:06:42.413 19:07:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:42.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:06:42.413 00:06:42.413 --- 10.0.0.1 ping statistics --- 00:06:42.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.413 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:06:42.413 19:07:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.413 19:07:18 -- nvmf/common.sh@421 -- # return 0 00:06:42.413 19:07:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:42.413 19:07:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.413 19:07:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:42.413 19:07:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:42.413 19:07:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.413 19:07:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:42.413 19:07:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:42.413 19:07:18 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:42.413 19:07:18 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:42.413 19:07:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:42.413 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:42.413 ************************************ 00:06:42.413 START TEST nvmf_filesystem_no_in_capsule 00:06:42.413 ************************************ 00:06:42.413 19:07:18 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_part 0 00:06:42.413 19:07:18 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:42.413 19:07:18 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:42.413 19:07:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:42.413 19:07:18 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:06:42.413 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:42.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.413 19:07:18 -- nvmf/common.sh@469 -- # nvmfpid=60727 00:06:42.413 19:07:18 -- nvmf/common.sh@470 -- # waitforlisten 60727 00:06:42.413 19:07:18 -- common/autotest_common.sh@817 -- # '[' -z 60727 ']' 00:06:42.413 19:07:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.413 19:07:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:42.413 19:07:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.413 19:07:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.413 19:07:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.413 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:42.413 [2024-02-14 19:07:19.025217] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:42.413 [2024-02-14 19:07:19.025332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.413 [2024-02-14 19:07:19.167735] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.413 [2024-02-14 19:07:19.323663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.413 [2024-02-14 19:07:19.324107] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.413 [2024-02-14 19:07:19.324164] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.413 [2024-02-14 19:07:19.324436] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
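A condensed sketch of the test topology that the traced nvmf/common.sh commands above have just built (namespace, interface names, and addresses are the fixture's own, copied from the trace; the individual "ip link set ... up" steps are omitted for brevity):

  # target namespace plus three veth pairs; the host-side peers are bridged together
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side, 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target side, 10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  modprobe nvme-tcp                                                    # initiator kernel module

The pings to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 in the trace verify this topology before the target is configured.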
00:06:42.413 [2024-02-14 19:07:19.324635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.413 [2024-02-14 19:07:19.325299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.413 [2024-02-14 19:07:19.325303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.413 [2024-02-14 19:07:19.325314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.672 19:07:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:42.672 19:07:19 -- common/autotest_common.sh@850 -- # return 0 00:06:42.672 19:07:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:42.672 19:07:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:42.672 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.672 19:07:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.672 19:07:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:42.672 19:07:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:42.672 19:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.672 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:42.672 [2024-02-14 19:07:20.014119] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.672 19:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.672 19:07:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:42.672 19:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.672 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:42.931 Malloc1 00:06:42.931 19:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.931 19:07:20 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:42.931 19:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.931 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:42.931 19:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.931 19:07:20 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:42.931 19:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.931 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:42.931 19:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.931 19:07:20 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.931 19:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.931 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:42.931 [2024-02-14 19:07:20.259109] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.931 19:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.931 19:07:20 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:42.931 19:07:20 -- common/autotest_common.sh@1355 -- # local bdev_name=Malloc1 00:06:42.931 19:07:20 -- common/autotest_common.sh@1356 -- # local bdev_info 00:06:42.931 19:07:20 -- common/autotest_common.sh@1357 -- # local bs 00:06:42.931 19:07:20 -- common/autotest_common.sh@1358 -- # local nb 00:06:42.931 19:07:20 -- common/autotest_common.sh@1359 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:42.931 19:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.931 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:42.931 
19:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.931 19:07:20 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:06:42.931 { 00:06:42.931 "aliases": [ 00:06:42.931 "de3addff-cb76-4646-be4a-a9e8bd13cf31" 00:06:42.931 ], 00:06:42.931 "assigned_rate_limits": { 00:06:42.931 "r_mbytes_per_sec": 0, 00:06:42.931 "rw_ios_per_sec": 0, 00:06:42.931 "rw_mbytes_per_sec": 0, 00:06:42.931 "w_mbytes_per_sec": 0 00:06:42.931 }, 00:06:42.931 "block_size": 512, 00:06:42.931 "claim_type": "exclusive_write", 00:06:42.931 "claimed": true, 00:06:42.931 "driver_specific": {}, 00:06:42.931 "memory_domains": [ 00:06:42.931 { 00:06:42.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.931 "dma_device_type": 2 00:06:42.931 } 00:06:42.931 ], 00:06:42.931 "name": "Malloc1", 00:06:42.931 "num_blocks": 1048576, 00:06:42.931 "product_name": "Malloc disk", 00:06:42.931 "supported_io_types": { 00:06:42.931 "abort": true, 00:06:42.931 "compare": false, 00:06:42.931 "compare_and_write": false, 00:06:42.931 "flush": true, 00:06:42.931 "nvme_admin": false, 00:06:42.931 "nvme_io": false, 00:06:42.931 "read": true, 00:06:42.931 "reset": true, 00:06:42.931 "unmap": true, 00:06:42.931 "write": true, 00:06:42.931 "write_zeroes": true 00:06:42.931 }, 00:06:42.931 "uuid": "de3addff-cb76-4646-be4a-a9e8bd13cf31", 00:06:42.931 "zoned": false 00:06:42.931 } 00:06:42.931 ]' 00:06:42.931 19:07:20 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:06:42.931 19:07:20 -- common/autotest_common.sh@1360 -- # bs=512 00:06:42.931 19:07:20 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:06:43.191 19:07:20 -- common/autotest_common.sh@1361 -- # nb=1048576 00:06:43.191 19:07:20 -- common/autotest_common.sh@1364 -- # bdev_size=512 00:06:43.191 19:07:20 -- common/autotest_common.sh@1365 -- # echo 512 00:06:43.191 19:07:20 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:43.191 19:07:20 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:43.191 19:07:20 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:43.191 19:07:20 -- common/autotest_common.sh@1175 -- # local i=0 00:06:43.191 19:07:20 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:06:43.191 19:07:20 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:06:43.191 19:07:20 -- common/autotest_common.sh@1182 -- # sleep 2 00:06:45.775 19:07:22 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:06:45.775 19:07:22 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:06:45.775 19:07:22 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:06:45.775 19:07:22 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:06:45.775 19:07:22 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:06:45.775 19:07:22 -- common/autotest_common.sh@1185 -- # return 0 00:06:45.775 19:07:22 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:45.775 19:07:22 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:45.775 19:07:22 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:45.775 19:07:22 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:45.775 19:07:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:45.775 19:07:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:45.775 19:07:22 -- 
setup/common.sh@80 -- # echo 536870912 00:06:45.775 19:07:22 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:45.775 19:07:22 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:45.775 19:07:22 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:45.775 19:07:22 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:45.775 19:07:22 -- target/filesystem.sh@69 -- # partprobe 00:06:45.775 19:07:22 -- target/filesystem.sh@70 -- # sleep 1 00:06:46.361 19:07:23 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:46.361 19:07:23 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:46.361 19:07:23 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:06:46.361 19:07:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:46.361 19:07:23 -- common/autotest_common.sh@10 -- # set +x 00:06:46.361 ************************************ 00:06:46.361 START TEST filesystem_ext4 00:06:46.361 ************************************ 00:06:46.361 19:07:23 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:46.361 19:07:23 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:46.361 19:07:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:46.361 19:07:23 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:46.361 19:07:23 -- common/autotest_common.sh@900 -- # local fstype=ext4 00:06:46.361 19:07:23 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:06:46.361 19:07:23 -- common/autotest_common.sh@902 -- # local i=0 00:06:46.361 19:07:23 -- common/autotest_common.sh@903 -- # local force 00:06:46.361 19:07:23 -- common/autotest_common.sh@905 -- # '[' ext4 = ext4 ']' 00:06:46.361 19:07:23 -- common/autotest_common.sh@906 -- # force=-F 00:06:46.361 19:07:23 -- common/autotest_common.sh@911 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:46.361 mke2fs 1.46.5 (30-Dec-2021) 00:06:46.620 Discarding device blocks: 0/522240 done 00:06:46.620 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:46.620 Filesystem UUID: 4e20729f-2c2f-46b8-a138-4ff60d9b424f 00:06:46.620 Superblock backups stored on blocks: 00:06:46.620 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:46.620 00:06:46.620 Allocating group tables: 0/64 done 00:06:46.620 Writing inode tables: 0/64 done 00:06:46.620 Creating journal (8192 blocks): done 00:06:46.620 Writing superblocks and filesystem accounting information: 0/64 done 00:06:46.620 00:06:46.620 19:07:23 -- common/autotest_common.sh@919 -- # return 0 00:06:46.620 19:07:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:46.620 19:07:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:46.620 19:07:24 -- target/filesystem.sh@25 -- # sync 00:06:46.879 19:07:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:46.879 19:07:24 -- target/filesystem.sh@27 -- # sync 00:06:46.879 19:07:24 -- target/filesystem.sh@29 -- # i=0 00:06:46.879 19:07:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:46.879 19:07:24 -- target/filesystem.sh@37 -- # kill -0 60727 00:06:46.879 19:07:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:46.879 19:07:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:46.879 19:07:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:46.879 19:07:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:46.879 ************************************ 00:06:46.879 END TEST filesystem_ext4 00:06:46.879 
************************************ 00:06:46.879 00:06:46.879 real 0m0.362s 00:06:46.879 user 0m0.027s 00:06:46.879 sys 0m0.051s 00:06:46.879 19:07:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.879 19:07:24 -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 19:07:24 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:46.879 19:07:24 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:06:46.879 19:07:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:46.879 19:07:24 -- common/autotest_common.sh@10 -- # set +x 00:06:46.879 ************************************ 00:06:46.879 START TEST filesystem_btrfs 00:06:46.879 ************************************ 00:06:46.879 19:07:24 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:46.879 19:07:24 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:46.879 19:07:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:46.879 19:07:24 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:46.879 19:07:24 -- common/autotest_common.sh@900 -- # local fstype=btrfs 00:06:46.879 19:07:24 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:06:46.879 19:07:24 -- common/autotest_common.sh@902 -- # local i=0 00:06:46.880 19:07:24 -- common/autotest_common.sh@903 -- # local force 00:06:46.880 19:07:24 -- common/autotest_common.sh@905 -- # '[' btrfs = ext4 ']' 00:06:46.880 19:07:24 -- common/autotest_common.sh@908 -- # force=-f 00:06:46.880 19:07:24 -- common/autotest_common.sh@911 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:47.139 btrfs-progs v6.6.2 00:06:47.139 See https://btrfs.readthedocs.io for more information. 00:06:47.139 00:06:47.139 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:47.139 NOTE: several default settings have changed in version 5.15, please make sure 00:06:47.139 this does not affect your deployments: 00:06:47.139 - DUP for metadata (-m dup) 00:06:47.139 - enabled no-holes (-O no-holes) 00:06:47.139 - enabled free-space-tree (-R free-space-tree) 00:06:47.139 00:06:47.139 Label: (null) 00:06:47.139 UUID: 9465a487-1c65-4b1c-a901-0208cdce3993 00:06:47.139 Node size: 16384 00:06:47.139 Sector size: 4096 00:06:47.139 Filesystem size: 510.00MiB 00:06:47.139 Block group profiles: 00:06:47.139 Data: single 8.00MiB 00:06:47.139 Metadata: DUP 32.00MiB 00:06:47.139 System: DUP 8.00MiB 00:06:47.139 SSD detected: yes 00:06:47.139 Zoned device: no 00:06:47.139 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:47.139 Runtime features: free-space-tree 00:06:47.139 Checksum: crc32c 00:06:47.139 Number of devices: 1 00:06:47.139 Devices: 00:06:47.139 ID SIZE PATH 00:06:47.139 1 510.00MiB /dev/nvme0n1p1 00:06:47.139 00:06:47.139 19:07:24 -- common/autotest_common.sh@919 -- # return 0 00:06:47.139 19:07:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:47.139 19:07:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:47.139 19:07:24 -- target/filesystem.sh@25 -- # sync 00:06:47.139 19:07:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:47.139 19:07:24 -- target/filesystem.sh@27 -- # sync 00:06:47.139 19:07:24 -- target/filesystem.sh@29 -- # i=0 00:06:47.139 19:07:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:47.139 19:07:24 -- target/filesystem.sh@37 -- # kill -0 60727 00:06:47.139 19:07:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:47.139 19:07:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:47.139 19:07:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:47.139 19:07:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:47.139 ************************************ 00:06:47.139 END TEST filesystem_btrfs 00:06:47.139 ************************************ 00:06:47.139 00:06:47.139 real 0m0.224s 00:06:47.139 user 0m0.016s 00:06:47.139 sys 0m0.068s 00:06:47.139 19:07:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.139 19:07:24 -- common/autotest_common.sh@10 -- # set +x 00:06:47.139 19:07:24 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:47.139 19:07:24 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:06:47.139 19:07:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:47.139 19:07:24 -- common/autotest_common.sh@10 -- # set +x 00:06:47.139 ************************************ 00:06:47.139 START TEST filesystem_xfs 00:06:47.139 ************************************ 00:06:47.139 19:07:24 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create xfs nvme0n1 00:06:47.139 19:07:24 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:47.139 19:07:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:47.139 19:07:24 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:47.139 19:07:24 -- common/autotest_common.sh@900 -- # local fstype=xfs 00:06:47.139 19:07:24 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:06:47.139 19:07:24 -- common/autotest_common.sh@902 -- # local i=0 00:06:47.139 19:07:24 -- common/autotest_common.sh@903 -- # local force 00:06:47.139 19:07:24 -- common/autotest_common.sh@905 -- # '[' xfs = ext4 ']' 00:06:47.139 19:07:24 -- common/autotest_common.sh@908 -- # force=-f 00:06:47.139 19:07:24 -- 
common/autotest_common.sh@911 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:47.139 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:47.139 = sectsz=512 attr=2, projid32bit=1 00:06:47.139 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:47.139 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:47.139 data = bsize=4096 blocks=130560, imaxpct=25 00:06:47.139 = sunit=0 swidth=0 blks 00:06:47.139 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:47.139 log =internal log bsize=4096 blocks=16384, version=2 00:06:47.139 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:47.139 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:48.075 Discarding blocks...Done. 00:06:48.075 19:07:25 -- common/autotest_common.sh@919 -- # return 0 00:06:48.075 19:07:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:50.608 19:07:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:50.608 19:07:27 -- target/filesystem.sh@25 -- # sync 00:06:50.608 19:07:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:50.608 19:07:27 -- target/filesystem.sh@27 -- # sync 00:06:50.608 19:07:27 -- target/filesystem.sh@29 -- # i=0 00:06:50.608 19:07:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:50.608 19:07:27 -- target/filesystem.sh@37 -- # kill -0 60727 00:06:50.608 19:07:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:50.608 19:07:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:50.608 19:07:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:50.608 19:07:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:50.608 ************************************ 00:06:50.608 END TEST filesystem_xfs 00:06:50.608 ************************************ 00:06:50.608 00:06:50.608 real 0m3.107s 00:06:50.608 user 0m0.017s 00:06:50.608 sys 0m0.064s 00:06:50.608 19:07:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.608 19:07:27 -- common/autotest_common.sh@10 -- # set +x 00:06:50.608 19:07:27 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:50.608 19:07:27 -- target/filesystem.sh@93 -- # sync 00:06:50.608 19:07:27 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:50.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:50.608 19:07:27 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:50.608 19:07:27 -- common/autotest_common.sh@1196 -- # local i=0 00:06:50.608 19:07:27 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:06:50.608 19:07:27 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:50.608 19:07:27 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:06:50.608 19:07:27 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:50.608 19:07:27 -- common/autotest_common.sh@1208 -- # return 0 00:06:50.608 19:07:27 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:50.608 19:07:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.608 19:07:27 -- common/autotest_common.sh@10 -- # set +x 00:06:50.608 19:07:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.608 19:07:27 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:50.608 19:07:27 -- target/filesystem.sh@101 -- # killprocess 60727 00:06:50.608 19:07:27 -- common/autotest_common.sh@924 -- # '[' -z 60727 ']' 00:06:50.608 19:07:27 -- common/autotest_common.sh@928 -- # kill -0 60727 00:06:50.608 19:07:27 -- 
common/autotest_common.sh@929 -- # uname 00:06:50.608 19:07:27 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:50.608 19:07:27 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 60727 00:06:50.608 killing process with pid 60727 00:06:50.608 19:07:27 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:50.608 19:07:27 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:50.608 19:07:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 60727' 00:06:50.608 19:07:27 -- common/autotest_common.sh@943 -- # kill 60727 00:06:50.608 19:07:27 -- common/autotest_common.sh@948 -- # wait 60727 00:06:51.175 ************************************ 00:06:51.175 END TEST nvmf_filesystem_no_in_capsule 00:06:51.175 ************************************ 00:06:51.175 19:07:28 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:51.175 00:06:51.175 real 0m9.425s 00:06:51.175 user 0m35.328s 00:06:51.175 sys 0m1.459s 00:06:51.175 19:07:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.175 19:07:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.175 19:07:28 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:51.175 19:07:28 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:51.175 19:07:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:51.175 19:07:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.175 ************************************ 00:06:51.175 START TEST nvmf_filesystem_in_capsule 00:06:51.175 ************************************ 00:06:51.175 19:07:28 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_part 4096 00:06:51.175 19:07:28 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:51.175 19:07:28 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:51.175 19:07:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:51.175 19:07:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:51.175 19:07:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.175 19:07:28 -- nvmf/common.sh@469 -- # nvmfpid=61039 00:06:51.175 19:07:28 -- nvmf/common.sh@470 -- # waitforlisten 61039 00:06:51.175 19:07:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.175 19:07:28 -- common/autotest_common.sh@817 -- # '[' -z 61039 ']' 00:06:51.175 19:07:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.175 19:07:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:51.175 19:07:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.175 19:07:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:51.175 19:07:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.175 [2024-02-14 19:07:28.504597] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
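The in-capsule pass that starts here repeats the same target bring-up as the first pass; the only intended difference is the in-capsule data size handed to the transport (-c 4096 instead of -c 0). Condensed from the traced commands, each pass boils down to roughly the following (NQNs, serial, and bdev name are the ones filesystem.sh uses; --hostid is omitted here):

  # target side, via rpc_cmd against the nvmf_tgt running in nvmf_tgt_ns_spdk
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096        # -c 0 in the no_in_capsule pass
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1                   # 512 MiB malloc bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side, from the host namespace
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME          # 1 once nvme0n1 appears

Each pass then partitions the new namespace (parted mklabel gpt mkpart SPDK_TEST 0% 100%), runs the same mkfs/mount/touch/sync/rm/umount check for ext4, btrfs, and xfs, disconnects, deletes the subsystem, and kills the target process.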
00:06:51.175 [2024-02-14 19:07:28.505673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.434 [2024-02-14 19:07:28.645571] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.434 [2024-02-14 19:07:28.798345] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.434 [2024-02-14 19:07:28.798869] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.434 [2024-02-14 19:07:28.799026] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.434 [2024-02-14 19:07:28.799176] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.434 [2024-02-14 19:07:28.799463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.434 [2024-02-14 19:07:28.799539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.434 [2024-02-14 19:07:28.799653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.434 [2024-02-14 19:07:28.799666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.369 19:07:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:52.369 19:07:29 -- common/autotest_common.sh@850 -- # return 0 00:06:52.369 19:07:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:52.369 19:07:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:52.369 19:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.369 19:07:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.369 19:07:29 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:52.369 19:07:29 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:52.369 19:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.369 19:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.369 [2024-02-14 19:07:29.494376] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.369 19:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.369 19:07:29 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:52.369 19:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.369 19:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.369 Malloc1 00:06:52.369 19:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.369 19:07:29 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:52.369 19:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.369 19:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.369 19:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.369 19:07:29 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:52.369 19:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.369 19:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.369 19:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.369 19:07:29 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.369 19:07:29 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.369 19:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.369 [2024-02-14 19:07:29.751567] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.369 19:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.369 19:07:29 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:52.369 19:07:29 -- common/autotest_common.sh@1355 -- # local bdev_name=Malloc1 00:06:52.369 19:07:29 -- common/autotest_common.sh@1356 -- # local bdev_info 00:06:52.369 19:07:29 -- common/autotest_common.sh@1357 -- # local bs 00:06:52.369 19:07:29 -- common/autotest_common.sh@1358 -- # local nb 00:06:52.369 19:07:29 -- common/autotest_common.sh@1359 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:52.369 19:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.369 19:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.369 19:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.369 19:07:29 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:06:52.369 { 00:06:52.369 "aliases": [ 00:06:52.369 "bacc2193-a4bf-40cf-a104-95dd321b9b0b" 00:06:52.369 ], 00:06:52.369 "assigned_rate_limits": { 00:06:52.369 "r_mbytes_per_sec": 0, 00:06:52.369 "rw_ios_per_sec": 0, 00:06:52.369 "rw_mbytes_per_sec": 0, 00:06:52.369 "w_mbytes_per_sec": 0 00:06:52.369 }, 00:06:52.369 "block_size": 512, 00:06:52.369 "claim_type": "exclusive_write", 00:06:52.369 "claimed": true, 00:06:52.369 "driver_specific": {}, 00:06:52.369 "memory_domains": [ 00:06:52.369 { 00:06:52.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.369 "dma_device_type": 2 00:06:52.369 } 00:06:52.369 ], 00:06:52.369 "name": "Malloc1", 00:06:52.369 "num_blocks": 1048576, 00:06:52.369 "product_name": "Malloc disk", 00:06:52.369 "supported_io_types": { 00:06:52.369 "abort": true, 00:06:52.369 "compare": false, 00:06:52.369 "compare_and_write": false, 00:06:52.369 "flush": true, 00:06:52.369 "nvme_admin": false, 00:06:52.369 "nvme_io": false, 00:06:52.369 "read": true, 00:06:52.369 "reset": true, 00:06:52.369 "unmap": true, 00:06:52.369 "write": true, 00:06:52.369 "write_zeroes": true 00:06:52.369 }, 00:06:52.369 "uuid": "bacc2193-a4bf-40cf-a104-95dd321b9b0b", 00:06:52.369 "zoned": false 00:06:52.369 } 00:06:52.369 ]' 00:06:52.369 19:07:29 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:06:52.628 19:07:29 -- common/autotest_common.sh@1360 -- # bs=512 00:06:52.628 19:07:29 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:06:52.628 19:07:29 -- common/autotest_common.sh@1361 -- # nb=1048576 00:06:52.628 19:07:29 -- common/autotest_common.sh@1364 -- # bdev_size=512 00:06:52.628 19:07:29 -- common/autotest_common.sh@1365 -- # echo 512 00:06:52.628 19:07:29 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:52.628 19:07:29 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:52.628 19:07:30 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:52.628 19:07:30 -- common/autotest_common.sh@1175 -- # local i=0 00:06:52.628 19:07:30 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:06:52.628 19:07:30 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:06:52.628 19:07:30 -- common/autotest_common.sh@1182 -- # sleep 2 00:06:55.159 19:07:32 -- 
common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:06:55.159 19:07:32 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:06:55.159 19:07:32 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:06:55.159 19:07:32 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:06:55.159 19:07:32 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:06:55.159 19:07:32 -- common/autotest_common.sh@1185 -- # return 0 00:06:55.159 19:07:32 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:55.159 19:07:32 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:55.159 19:07:32 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:55.159 19:07:32 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:55.159 19:07:32 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:55.159 19:07:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:55.159 19:07:32 -- setup/common.sh@80 -- # echo 536870912 00:06:55.159 19:07:32 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:55.159 19:07:32 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:55.159 19:07:32 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:55.159 19:07:32 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:55.159 19:07:32 -- target/filesystem.sh@69 -- # partprobe 00:06:55.159 19:07:32 -- target/filesystem.sh@70 -- # sleep 1 00:06:56.095 19:07:33 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:56.095 19:07:33 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:56.095 19:07:33 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:06:56.095 19:07:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:56.095 19:07:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.095 ************************************ 00:06:56.095 START TEST filesystem_in_capsule_ext4 00:06:56.095 ************************************ 00:06:56.095 19:07:33 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:56.095 19:07:33 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:56.095 19:07:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:56.095 19:07:33 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:56.096 19:07:33 -- common/autotest_common.sh@900 -- # local fstype=ext4 00:06:56.096 19:07:33 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:06:56.096 19:07:33 -- common/autotest_common.sh@902 -- # local i=0 00:06:56.096 19:07:33 -- common/autotest_common.sh@903 -- # local force 00:06:56.096 19:07:33 -- common/autotest_common.sh@905 -- # '[' ext4 = ext4 ']' 00:06:56.096 19:07:33 -- common/autotest_common.sh@906 -- # force=-F 00:06:56.096 19:07:33 -- common/autotest_common.sh@911 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:56.096 mke2fs 1.46.5 (30-Dec-2021) 00:06:56.096 Discarding device blocks: 0/522240 done 00:06:56.096 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:56.096 Filesystem UUID: 3b225192-95a4-4e97-9d6a-6363dc500356 00:06:56.096 Superblock backups stored on blocks: 00:06:56.096 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:56.096 00:06:56.096 Allocating group tables: 0/64 done 00:06:56.096 Writing inode tables: 0/64 done 00:06:56.096 Creating journal (8192 blocks): done 00:06:56.096 Writing superblocks and filesystem accounting information: 0/64 done 00:06:56.096 00:06:56.096 
19:07:33 -- common/autotest_common.sh@919 -- # return 0 00:06:56.096 19:07:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:56.096 19:07:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:56.096 19:07:33 -- target/filesystem.sh@25 -- # sync 00:06:56.355 19:07:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:56.355 19:07:33 -- target/filesystem.sh@27 -- # sync 00:06:56.355 19:07:33 -- target/filesystem.sh@29 -- # i=0 00:06:56.355 19:07:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:56.355 19:07:33 -- target/filesystem.sh@37 -- # kill -0 61039 00:06:56.355 19:07:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:56.355 19:07:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:56.355 19:07:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:56.355 19:07:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:56.355 ************************************ 00:06:56.355 END TEST filesystem_in_capsule_ext4 00:06:56.355 ************************************ 00:06:56.355 00:06:56.355 real 0m0.353s 00:06:56.355 user 0m0.027s 00:06:56.355 sys 0m0.050s 00:06:56.355 19:07:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.355 19:07:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.355 19:07:33 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:56.355 19:07:33 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:06:56.355 19:07:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:56.355 19:07:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.355 ************************************ 00:06:56.355 START TEST filesystem_in_capsule_btrfs 00:06:56.355 ************************************ 00:06:56.355 19:07:33 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:56.355 19:07:33 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:56.355 19:07:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:56.355 19:07:33 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:56.355 19:07:33 -- common/autotest_common.sh@900 -- # local fstype=btrfs 00:06:56.355 19:07:33 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:06:56.355 19:07:33 -- common/autotest_common.sh@902 -- # local i=0 00:06:56.355 19:07:33 -- common/autotest_common.sh@903 -- # local force 00:06:56.355 19:07:33 -- common/autotest_common.sh@905 -- # '[' btrfs = ext4 ']' 00:06:56.355 19:07:33 -- common/autotest_common.sh@908 -- # force=-f 00:06:56.355 19:07:33 -- common/autotest_common.sh@911 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:56.355 btrfs-progs v6.6.2 00:06:56.355 See https://btrfs.readthedocs.io for more information. 00:06:56.355 00:06:56.355 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:56.355 NOTE: several default settings have changed in version 5.15, please make sure 00:06:56.355 this does not affect your deployments: 00:06:56.355 - DUP for metadata (-m dup) 00:06:56.355 - enabled no-holes (-O no-holes) 00:06:56.355 - enabled free-space-tree (-R free-space-tree) 00:06:56.355 00:06:56.355 Label: (null) 00:06:56.355 UUID: f9427065-8e50-4544-a04a-d475666c1b16 00:06:56.355 Node size: 16384 00:06:56.355 Sector size: 4096 00:06:56.355 Filesystem size: 510.00MiB 00:06:56.355 Block group profiles: 00:06:56.355 Data: single 8.00MiB 00:06:56.355 Metadata: DUP 32.00MiB 00:06:56.355 System: DUP 8.00MiB 00:06:56.355 SSD detected: yes 00:06:56.355 Zoned device: no 00:06:56.355 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:56.355 Runtime features: free-space-tree 00:06:56.355 Checksum: crc32c 00:06:56.355 Number of devices: 1 00:06:56.355 Devices: 00:06:56.355 ID SIZE PATH 00:06:56.355 1 510.00MiB /dev/nvme0n1p1 00:06:56.355 00:06:56.355 19:07:33 -- common/autotest_common.sh@919 -- # return 0 00:06:56.355 19:07:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:56.613 19:07:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:56.613 19:07:33 -- target/filesystem.sh@25 -- # sync 00:06:56.613 19:07:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:56.613 19:07:33 -- target/filesystem.sh@27 -- # sync 00:06:56.613 19:07:33 -- target/filesystem.sh@29 -- # i=0 00:06:56.613 19:07:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:56.613 19:07:33 -- target/filesystem.sh@37 -- # kill -0 61039 00:06:56.613 19:07:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:56.613 19:07:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:56.613 19:07:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:56.613 19:07:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:56.613 ************************************ 00:06:56.613 END TEST filesystem_in_capsule_btrfs 00:06:56.613 ************************************ 00:06:56.613 00:06:56.613 real 0m0.220s 00:06:56.613 user 0m0.023s 00:06:56.613 sys 0m0.055s 00:06:56.613 19:07:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.613 19:07:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.613 19:07:33 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:56.613 19:07:33 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:06:56.613 19:07:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:56.613 19:07:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.614 ************************************ 00:06:56.614 START TEST filesystem_in_capsule_xfs 00:06:56.614 ************************************ 00:06:56.614 19:07:33 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create xfs nvme0n1 00:06:56.614 19:07:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:56.614 19:07:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:56.614 19:07:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:56.614 19:07:33 -- common/autotest_common.sh@900 -- # local fstype=xfs 00:06:56.614 19:07:33 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:06:56.614 19:07:33 -- common/autotest_common.sh@902 -- # local i=0 00:06:56.614 19:07:33 -- common/autotest_common.sh@903 -- # local force 00:06:56.614 19:07:33 -- common/autotest_common.sh@905 -- # '[' xfs = ext4 ']' 00:06:56.614 19:07:33 -- common/autotest_common.sh@908 -- # force=-f 
00:06:56.614 19:07:33 -- common/autotest_common.sh@911 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:56.614 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:56.614 = sectsz=512 attr=2, projid32bit=1 00:06:56.614 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:56.614 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:56.614 data = bsize=4096 blocks=130560, imaxpct=25 00:06:56.614 = sunit=0 swidth=0 blks 00:06:56.614 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:56.614 log =internal log bsize=4096 blocks=16384, version=2 00:06:56.614 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:56.614 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:57.548 Discarding blocks...Done. 00:06:57.548 19:07:34 -- common/autotest_common.sh@919 -- # return 0 00:06:57.548 19:07:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:59.449 19:07:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:59.449 19:07:36 -- target/filesystem.sh@25 -- # sync 00:06:59.449 19:07:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:59.449 19:07:36 -- target/filesystem.sh@27 -- # sync 00:06:59.449 19:07:36 -- target/filesystem.sh@29 -- # i=0 00:06:59.449 19:07:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:59.449 19:07:36 -- target/filesystem.sh@37 -- # kill -0 61039 00:06:59.449 19:07:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:59.449 19:07:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:59.449 19:07:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:59.449 19:07:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:59.449 ************************************ 00:06:59.449 END TEST filesystem_in_capsule_xfs 00:06:59.449 ************************************ 00:06:59.449 00:06:59.449 real 0m2.675s 00:06:59.449 user 0m0.019s 00:06:59.449 sys 0m0.059s 00:06:59.449 19:07:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.449 19:07:36 -- common/autotest_common.sh@10 -- # set +x 00:06:59.449 19:07:36 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:59.449 19:07:36 -- target/filesystem.sh@93 -- # sync 00:06:59.449 19:07:36 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:59.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.449 19:07:36 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:59.449 19:07:36 -- common/autotest_common.sh@1196 -- # local i=0 00:06:59.449 19:07:36 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:06:59.449 19:07:36 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.449 19:07:36 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:06:59.449 19:07:36 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.449 19:07:36 -- common/autotest_common.sh@1208 -- # return 0 00:06:59.449 19:07:36 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:59.449 19:07:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.449 19:07:36 -- common/autotest_common.sh@10 -- # set +x 00:06:59.449 19:07:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.449 19:07:36 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:59.449 19:07:36 -- target/filesystem.sh@101 -- # killprocess 61039 00:06:59.449 19:07:36 -- common/autotest_common.sh@924 -- # '[' -z 61039 ']' 00:06:59.449 19:07:36 -- common/autotest_common.sh@928 -- # kill -0 61039 
00:06:59.449 19:07:36 -- common/autotest_common.sh@929 -- # uname 00:06:59.449 19:07:36 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:59.449 19:07:36 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 61039 00:06:59.449 19:07:36 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:59.449 19:07:36 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:59.449 killing process with pid 61039 00:06:59.449 19:07:36 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 61039' 00:06:59.449 19:07:36 -- common/autotest_common.sh@943 -- # kill 61039 00:06:59.449 19:07:36 -- common/autotest_common.sh@948 -- # wait 61039 00:07:00.015 19:07:37 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:00.015 00:07:00.015 real 0m8.878s 00:07:00.015 user 0m33.276s 00:07:00.015 sys 0m1.415s 00:07:00.015 19:07:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.015 19:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.015 ************************************ 00:07:00.015 END TEST nvmf_filesystem_in_capsule 00:07:00.015 ************************************ 00:07:00.015 19:07:37 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:00.015 19:07:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:00.015 19:07:37 -- nvmf/common.sh@116 -- # sync 00:07:00.015 19:07:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:00.015 19:07:37 -- nvmf/common.sh@119 -- # set +e 00:07:00.015 19:07:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:00.015 19:07:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:00.015 rmmod nvme_tcp 00:07:00.015 rmmod nvme_fabrics 00:07:00.015 rmmod nvme_keyring 00:07:00.274 19:07:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:00.274 19:07:37 -- nvmf/common.sh@123 -- # set -e 00:07:00.274 19:07:37 -- nvmf/common.sh@124 -- # return 0 00:07:00.274 19:07:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:00.274 19:07:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:00.274 19:07:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:00.274 19:07:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:00.274 19:07:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.274 19:07:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:00.274 19:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.274 19:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.274 19:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.274 19:07:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:00.274 00:07:00.274 real 0m19.113s 00:07:00.274 user 1m8.835s 00:07:00.274 sys 0m3.260s 00:07:00.274 19:07:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.274 19:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.274 ************************************ 00:07:00.274 END TEST nvmf_filesystem 00:07:00.274 ************************************ 00:07:00.274 19:07:37 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:00.274 19:07:37 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:00.275 19:07:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:00.275 19:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.275 ************************************ 00:07:00.275 START TEST nvmf_discovery 00:07:00.275 ************************************ 00:07:00.275 19:07:37 -- 
common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:00.275 * Looking for test storage... 00:07:00.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:00.275 19:07:37 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:00.275 19:07:37 -- nvmf/common.sh@7 -- # uname -s 00:07:00.275 19:07:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.275 19:07:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.275 19:07:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.275 19:07:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.275 19:07:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.275 19:07:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.275 19:07:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.275 19:07:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.275 19:07:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.275 19:07:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.275 19:07:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:07:00.275 19:07:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:07:00.275 19:07:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.275 19:07:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.275 19:07:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:00.275 19:07:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.275 19:07:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.275 19:07:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.275 19:07:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.275 19:07:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.275 19:07:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.275 19:07:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.275 19:07:37 -- paths/export.sh@5 -- # export PATH 00:07:00.275 19:07:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.275 19:07:37 -- nvmf/common.sh@46 -- # : 0 00:07:00.275 19:07:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:00.275 19:07:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:00.275 19:07:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:00.275 19:07:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.275 19:07:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.275 19:07:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:00.275 19:07:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:00.275 19:07:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:00.275 19:07:37 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:00.275 19:07:37 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:00.275 19:07:37 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:00.275 19:07:37 -- target/discovery.sh@15 -- # hash nvme 00:07:00.275 19:07:37 -- target/discovery.sh@20 -- # nvmftestinit 00:07:00.275 19:07:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:00.275 19:07:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.275 19:07:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:00.275 19:07:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:00.275 19:07:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:00.275 19:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.275 19:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.275 19:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.275 19:07:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:00.275 19:07:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:00.275 19:07:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:00.275 19:07:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:00.275 19:07:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:00.275 19:07:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:00.275 19:07:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.275 19:07:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.275 19:07:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:00.275 19:07:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:00.275 19:07:37 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:00.275 19:07:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:00.275 19:07:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:00.275 19:07:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.275 19:07:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:00.275 19:07:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:00.275 19:07:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:00.275 19:07:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:00.275 19:07:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:00.275 19:07:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:00.275 Cannot find device "nvmf_tgt_br" 00:07:00.275 19:07:37 -- nvmf/common.sh@154 -- # true 00:07:00.275 19:07:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:00.275 Cannot find device "nvmf_tgt_br2" 00:07:00.275 19:07:37 -- nvmf/common.sh@155 -- # true 00:07:00.275 19:07:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:00.275 19:07:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:00.275 Cannot find device "nvmf_tgt_br" 00:07:00.275 19:07:37 -- nvmf/common.sh@157 -- # true 00:07:00.275 19:07:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:00.535 Cannot find device "nvmf_tgt_br2" 00:07:00.535 19:07:37 -- nvmf/common.sh@158 -- # true 00:07:00.535 19:07:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:00.535 19:07:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:00.535 19:07:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:00.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.535 19:07:37 -- nvmf/common.sh@161 -- # true 00:07:00.535 19:07:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:00.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.536 19:07:37 -- nvmf/common.sh@162 -- # true 00:07:00.536 19:07:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:00.536 19:07:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:00.536 19:07:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:00.536 19:07:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:00.536 19:07:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:00.536 19:07:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:00.536 19:07:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:00.536 19:07:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:00.536 19:07:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:00.536 19:07:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:00.536 19:07:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:00.536 19:07:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:00.536 19:07:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:00.536 19:07:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:00.536 19:07:37 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:00.536 19:07:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:00.536 19:07:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:00.536 19:07:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:00.536 19:07:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:00.536 19:07:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:00.536 19:07:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:00.536 19:07:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:00.810 19:07:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:00.810 19:07:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:00.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:07:00.810 00:07:00.810 --- 10.0.0.2 ping statistics --- 00:07:00.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.810 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:00.810 19:07:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:00.810 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:00.810 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:07:00.810 00:07:00.810 --- 10.0.0.3 ping statistics --- 00:07:00.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.810 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:00.810 19:07:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:00.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:00.810 00:07:00.810 --- 10.0.0.1 ping statistics --- 00:07:00.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.810 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:00.810 19:07:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.810 19:07:37 -- nvmf/common.sh@421 -- # return 0 00:07:00.810 19:07:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:00.810 19:07:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.810 19:07:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:00.810 19:07:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:00.810 19:07:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.810 19:07:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:00.810 19:07:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:00.810 19:07:37 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:00.810 19:07:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:00.810 19:07:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:00.810 19:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.810 19:07:38 -- nvmf/common.sh@469 -- # nvmfpid=61496 00:07:00.810 19:07:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:00.810 19:07:38 -- nvmf/common.sh@470 -- # waitforlisten 61496 00:07:00.810 19:07:38 -- common/autotest_common.sh@817 -- # '[' -z 61496 ']' 00:07:00.810 19:07:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.810 19:07:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:00.810 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.810 19:07:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.810 19:07:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:00.810 19:07:38 -- common/autotest_common.sh@10 -- # set +x 00:07:00.810 [2024-02-14 19:07:38.068480] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:00.810 [2024-02-14 19:07:38.068586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.810 [2024-02-14 19:07:38.206473] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.109 [2024-02-14 19:07:38.351392] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:01.109 [2024-02-14 19:07:38.351546] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.109 [2024-02-14 19:07:38.351560] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.109 [2024-02-14 19:07:38.351569] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.109 [2024-02-14 19:07:38.351763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.109 [2024-02-14 19:07:38.352471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.109 [2024-02-14 19:07:38.352641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.109 [2024-02-14 19:07:38.352728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.675 19:07:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.675 19:07:39 -- common/autotest_common.sh@850 -- # return 0 00:07:01.675 19:07:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:01.675 19:07:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:01.675 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.675 19:07:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.675 19:07:39 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.675 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.675 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.675 [2024-02-14 19:07:39.064908] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@26 -- # seq 1 4 00:07:01.934 19:07:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:01.934 19:07:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 Null1 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
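For reference, the rpc_cmd calls interleaved in the trace above and below drive the freshly started target entirely over JSON-RPC. A minimal hand-run sketch of the same setup, assuming rpc_cmd forwards to scripts/rpc.py against the default /var/tmp/spdk.sock socket (both assumptions, since the log only shows the wrapper), would be:

  # transport, backing bdev, subsystem, namespace, listener -- one pass of the "seq 1 4" loop
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_null_create Null1 102400 512          # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The loop repeats this for Null2/cnode2 through Null4/cnode4, then adds a discovery listener and a port-4430 referral before reading the discovery log back with nvme discover, as the trace below shows.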
00:07:01.934 19:07:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 [2024-02-14 19:07:39.142158] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:01.934 19:07:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 Null2 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:01.934 19:07:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 Null3 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:01.934 19:07:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 Null4 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.934 19:07:39 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -a 10.0.0.2 -s 4420 00:07:01.934 00:07:01.934 Discovery Log Number of Records 6, Generation counter 6 00:07:01.934 =====Discovery Log Entry 0====== 00:07:01.934 trtype: tcp 00:07:01.934 adrfam: ipv4 00:07:01.934 subtype: current discovery subsystem 00:07:01.934 treq: not required 00:07:01.934 portid: 0 00:07:01.934 trsvcid: 4420 00:07:01.934 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:01.934 traddr: 10.0.0.2 00:07:01.934 eflags: explicit discovery connections, duplicate discovery information 00:07:01.934 sectype: none 00:07:01.934 =====Discovery Log Entry 1====== 00:07:01.934 trtype: tcp 00:07:01.934 adrfam: ipv4 00:07:01.934 subtype: nvme subsystem 00:07:01.934 treq: not required 00:07:01.934 portid: 0 00:07:01.934 trsvcid: 4420 00:07:01.934 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:01.934 traddr: 10.0.0.2 00:07:01.934 eflags: none 00:07:01.934 sectype: none 00:07:01.934 =====Discovery Log Entry 2====== 00:07:01.934 trtype: tcp 00:07:01.934 adrfam: ipv4 00:07:01.934 subtype: nvme subsystem 00:07:01.934 treq: not required 00:07:01.934 portid: 0 00:07:01.934 trsvcid: 4420 
00:07:01.934 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:01.934 traddr: 10.0.0.2 00:07:01.934 eflags: none 00:07:01.934 sectype: none 00:07:01.934 =====Discovery Log Entry 3====== 00:07:01.934 trtype: tcp 00:07:01.934 adrfam: ipv4 00:07:01.934 subtype: nvme subsystem 00:07:01.934 treq: not required 00:07:01.934 portid: 0 00:07:01.934 trsvcid: 4420 00:07:01.934 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:01.934 traddr: 10.0.0.2 00:07:01.934 eflags: none 00:07:01.934 sectype: none 00:07:01.934 =====Discovery Log Entry 4====== 00:07:01.934 trtype: tcp 00:07:01.934 adrfam: ipv4 00:07:01.934 subtype: nvme subsystem 00:07:01.934 treq: not required 00:07:01.934 portid: 0 00:07:01.934 trsvcid: 4420 00:07:01.934 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:01.934 traddr: 10.0.0.2 00:07:01.934 eflags: none 00:07:01.934 sectype: none 00:07:01.934 =====Discovery Log Entry 5====== 00:07:01.934 trtype: tcp 00:07:01.934 adrfam: ipv4 00:07:01.934 subtype: discovery subsystem referral 00:07:01.934 treq: not required 00:07:01.934 portid: 0 00:07:01.934 trsvcid: 4430 00:07:01.934 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:01.934 traddr: 10.0.0.2 00:07:01.934 eflags: none 00:07:01.934 sectype: none 00:07:01.934 Perform nvmf subsystem discovery via RPC 00:07:01.934 19:07:39 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:01.934 19:07:39 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:01.934 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.934 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:01.935 [2024-02-14 19:07:39.334265] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:01.935 [ 00:07:01.935 { 00:07:01.935 "allow_any_host": true, 00:07:01.935 "hosts": [], 00:07:01.935 "listen_addresses": [ 00:07:01.935 { 00:07:01.935 "adrfam": "IPv4", 00:07:01.935 "traddr": "10.0.0.2", 00:07:01.935 "transport": "TCP", 00:07:01.935 "trsvcid": "4420", 00:07:01.935 "trtype": "TCP" 00:07:01.935 } 00:07:01.935 ], 00:07:01.935 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:01.935 "subtype": "Discovery" 00:07:01.935 }, 00:07:01.935 { 00:07:01.935 "allow_any_host": true, 00:07:01.935 "hosts": [], 00:07:01.935 "listen_addresses": [ 00:07:01.935 { 00:07:01.935 "adrfam": "IPv4", 00:07:01.935 "traddr": "10.0.0.2", 00:07:01.935 "transport": "TCP", 00:07:01.935 "trsvcid": "4420", 00:07:01.935 "trtype": "TCP" 00:07:01.935 } 00:07:01.935 ], 00:07:01.935 "max_cntlid": 65519, 00:07:01.935 "max_namespaces": 32, 00:07:01.935 "min_cntlid": 1, 00:07:01.935 "model_number": "SPDK bdev Controller", 00:07:01.935 "namespaces": [ 00:07:01.935 { 00:07:01.935 "bdev_name": "Null1", 00:07:01.935 "name": "Null1", 00:07:02.193 "nguid": "3EF427F3C7654480A4D7443BC569CD7D", 00:07:02.193 "nsid": 1, 00:07:02.193 "uuid": "3ef427f3-c765-4480-a4d7-443bc569cd7d" 00:07:02.193 } 00:07:02.193 ], 00:07:02.193 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:02.193 "serial_number": "SPDK00000000000001", 00:07:02.193 "subtype": "NVMe" 00:07:02.193 }, 00:07:02.193 { 00:07:02.193 "allow_any_host": true, 00:07:02.193 "hosts": [], 00:07:02.193 "listen_addresses": [ 00:07:02.193 { 00:07:02.193 "adrfam": "IPv4", 00:07:02.193 "traddr": "10.0.0.2", 00:07:02.193 "transport": "TCP", 00:07:02.193 "trsvcid": "4420", 00:07:02.193 "trtype": "TCP" 00:07:02.193 } 00:07:02.193 ], 00:07:02.193 "max_cntlid": 65519, 00:07:02.193 "max_namespaces": 32, 00:07:02.193 "min_cntlid": 1, 
00:07:02.193 "model_number": "SPDK bdev Controller", 00:07:02.193 "namespaces": [ 00:07:02.193 { 00:07:02.193 "bdev_name": "Null2", 00:07:02.193 "name": "Null2", 00:07:02.193 "nguid": "D90A7452F0614D78AECE79CC332A26C7", 00:07:02.193 "nsid": 1, 00:07:02.193 "uuid": "d90a7452-f061-4d78-aece-79cc332a26c7" 00:07:02.193 } 00:07:02.193 ], 00:07:02.193 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:02.193 "serial_number": "SPDK00000000000002", 00:07:02.193 "subtype": "NVMe" 00:07:02.193 }, 00:07:02.193 { 00:07:02.193 "allow_any_host": true, 00:07:02.193 "hosts": [], 00:07:02.193 "listen_addresses": [ 00:07:02.193 { 00:07:02.193 "adrfam": "IPv4", 00:07:02.193 "traddr": "10.0.0.2", 00:07:02.193 "transport": "TCP", 00:07:02.193 "trsvcid": "4420", 00:07:02.193 "trtype": "TCP" 00:07:02.193 } 00:07:02.193 ], 00:07:02.193 "max_cntlid": 65519, 00:07:02.193 "max_namespaces": 32, 00:07:02.193 "min_cntlid": 1, 00:07:02.193 "model_number": "SPDK bdev Controller", 00:07:02.193 "namespaces": [ 00:07:02.193 { 00:07:02.193 "bdev_name": "Null3", 00:07:02.193 "name": "Null3", 00:07:02.193 "nguid": "F3BFC6E6AA5B4054B684ED3FCB672DC4", 00:07:02.193 "nsid": 1, 00:07:02.193 "uuid": "f3bfc6e6-aa5b-4054-b684-ed3fcb672dc4" 00:07:02.193 } 00:07:02.193 ], 00:07:02.193 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:02.193 "serial_number": "SPDK00000000000003", 00:07:02.193 "subtype": "NVMe" 00:07:02.193 }, 00:07:02.194 { 00:07:02.194 "allow_any_host": true, 00:07:02.194 "hosts": [], 00:07:02.194 "listen_addresses": [ 00:07:02.194 { 00:07:02.194 "adrfam": "IPv4", 00:07:02.194 "traddr": "10.0.0.2", 00:07:02.194 "transport": "TCP", 00:07:02.194 "trsvcid": "4420", 00:07:02.194 "trtype": "TCP" 00:07:02.194 } 00:07:02.194 ], 00:07:02.194 "max_cntlid": 65519, 00:07:02.194 "max_namespaces": 32, 00:07:02.194 "min_cntlid": 1, 00:07:02.194 "model_number": "SPDK bdev Controller", 00:07:02.194 "namespaces": [ 00:07:02.194 { 00:07:02.194 "bdev_name": "Null4", 00:07:02.194 "name": "Null4", 00:07:02.194 "nguid": "1496586F195D4599B5859EA5B1F6DCCF", 00:07:02.194 "nsid": 1, 00:07:02.194 "uuid": "1496586f-195d-4599-b585-9ea5b1f6dccf" 00:07:02.194 } 00:07:02.194 ], 00:07:02.194 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:02.194 "serial_number": "SPDK00000000000004", 00:07:02.194 "subtype": "NVMe" 00:07:02.194 } 00:07:02.194 ] 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- target/discovery.sh@42 -- # seq 1 4 00:07:02.194 19:07:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.194 19:07:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.194 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.194 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:02.194 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.194 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.194 19:07:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:02.194 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.194 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:02.194 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.194 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.194 19:07:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:02.194 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.194 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:02.194 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.194 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:02.194 19:07:39 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:02.194 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.194 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:02.194 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.194 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:02.194 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.194 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:02.194 19:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.194 19:07:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 19:07:39 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:02.194 19:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.194 19:07:39 -- target/discovery.sh@49 -- # check_bdevs= 00:07:02.194 19:07:39 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:02.194 19:07:39 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:02.194 19:07:39 -- target/discovery.sh@57 -- # nvmftestfini 00:07:02.194 19:07:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:02.194 19:07:39 -- nvmf/common.sh@116 -- # sync 00:07:02.194 19:07:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:02.194 19:07:39 -- nvmf/common.sh@119 -- # set +e 00:07:02.194 19:07:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:02.194 19:07:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:02.194 rmmod nvme_tcp 00:07:02.194 rmmod nvme_fabrics 00:07:02.194 rmmod nvme_keyring 00:07:02.194 19:07:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:02.453 19:07:39 -- nvmf/common.sh@123 -- # set -e 00:07:02.453 19:07:39 -- nvmf/common.sh@124 -- # return 0 00:07:02.453 19:07:39 -- nvmf/common.sh@477 -- # '[' -n 61496 ']' 00:07:02.453 19:07:39 -- nvmf/common.sh@478 -- # killprocess 61496 00:07:02.453 19:07:39 -- common/autotest_common.sh@924 -- # '[' -z 61496 ']' 00:07:02.453 19:07:39 -- 
common/autotest_common.sh@928 -- # kill -0 61496 00:07:02.453 19:07:39 -- common/autotest_common.sh@929 -- # uname 00:07:02.453 19:07:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:02.453 19:07:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 61496 00:07:02.453 killing process with pid 61496 00:07:02.453 19:07:39 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:02.453 19:07:39 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:02.453 19:07:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 61496' 00:07:02.453 19:07:39 -- common/autotest_common.sh@943 -- # kill 61496 00:07:02.453 [2024-02-14 19:07:39.643386] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:02.453 19:07:39 -- common/autotest_common.sh@948 -- # wait 61496 00:07:02.716 19:07:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:02.716 19:07:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:02.716 19:07:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:02.716 19:07:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.716 19:07:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:02.716 19:07:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.716 19:07:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.716 19:07:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.716 19:07:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:02.716 00:07:02.716 real 0m2.506s 00:07:02.716 user 0m6.607s 00:07:02.716 sys 0m0.676s 00:07:02.716 19:07:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.716 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:07:02.716 ************************************ 00:07:02.716 END TEST nvmf_discovery 00:07:02.716 ************************************ 00:07:02.716 19:07:40 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:02.716 19:07:40 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:02.716 19:07:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:02.716 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:07:02.716 ************************************ 00:07:02.716 START TEST nvmf_referrals 00:07:02.716 ************************************ 00:07:02.717 19:07:40 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:02.980 * Looking for test storage... 
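Before the referrals run below gets going, note how the nvmf_discovery pass above checked its results: the same state is visible from the target side over JSON-RPC and from the host side through the discovery log. A rough sketch of those two checks, assuming scripts/rpc.py on the default socket and the per-run $NVME_HOSTNQN/$NVME_HOSTID generated by nvme gen-hostnqn:

  # target-side view: the four NVMe subsystems plus the discovery subsystem
  scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
  # host-side view: the discovery log on the data port reports 6 records
  # (current discovery subsystem, cnode1-cnode4, and the port-4430 referral)
  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
      "--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID" -o json |
      jq -r '.records[] | [.subtype, .subnqn, .traddr] | @tsv'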
00:07:02.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:02.980 19:07:40 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:02.980 19:07:40 -- nvmf/common.sh@7 -- # uname -s 00:07:02.980 19:07:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.980 19:07:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.980 19:07:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.980 19:07:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.980 19:07:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.980 19:07:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.980 19:07:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.980 19:07:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.980 19:07:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.980 19:07:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.980 19:07:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:07:02.980 19:07:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:07:02.980 19:07:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.980 19:07:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.980 19:07:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:02.980 19:07:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.980 19:07:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.980 19:07:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.980 19:07:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.980 19:07:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.981 19:07:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.981 19:07:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.981 19:07:40 -- 
paths/export.sh@5 -- # export PATH 00:07:02.981 19:07:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.981 19:07:40 -- nvmf/common.sh@46 -- # : 0 00:07:02.981 19:07:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:02.981 19:07:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:02.981 19:07:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:02.981 19:07:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.981 19:07:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.981 19:07:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:02.981 19:07:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:02.981 19:07:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:02.981 19:07:40 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:02.981 19:07:40 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:02.981 19:07:40 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:02.981 19:07:40 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:02.981 19:07:40 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:02.981 19:07:40 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:02.981 19:07:40 -- target/referrals.sh@37 -- # nvmftestinit 00:07:02.981 19:07:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:02.981 19:07:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.981 19:07:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:02.981 19:07:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:02.981 19:07:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:02.981 19:07:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.981 19:07:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.981 19:07:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.981 19:07:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:02.981 19:07:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:02.981 19:07:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:02.981 19:07:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:02.981 19:07:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:02.981 19:07:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:02.981 19:07:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.981 19:07:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.981 19:07:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:02.981 19:07:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:02.981 19:07:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:02.981 19:07:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:02.981 19:07:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:02.981 19:07:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.981 19:07:40 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:02.981 19:07:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:02.981 19:07:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:02.981 19:07:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:02.981 19:07:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:02.981 19:07:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:02.981 Cannot find device "nvmf_tgt_br" 00:07:02.981 19:07:40 -- nvmf/common.sh@154 -- # true 00:07:02.981 19:07:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:02.981 Cannot find device "nvmf_tgt_br2" 00:07:02.981 19:07:40 -- nvmf/common.sh@155 -- # true 00:07:02.981 19:07:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:02.981 19:07:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:02.981 Cannot find device "nvmf_tgt_br" 00:07:02.981 19:07:40 -- nvmf/common.sh@157 -- # true 00:07:02.981 19:07:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:02.981 Cannot find device "nvmf_tgt_br2" 00:07:02.981 19:07:40 -- nvmf/common.sh@158 -- # true 00:07:02.981 19:07:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:02.981 19:07:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:02.981 19:07:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:02.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:02.981 19:07:40 -- nvmf/common.sh@161 -- # true 00:07:02.981 19:07:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:02.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:02.981 19:07:40 -- nvmf/common.sh@162 -- # true 00:07:02.981 19:07:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:02.981 19:07:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:02.981 19:07:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:02.981 19:07:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:03.241 19:07:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:03.241 19:07:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:03.241 19:07:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:03.241 19:07:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:03.241 19:07:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:03.241 19:07:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:03.241 19:07:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:03.241 19:07:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:03.241 19:07:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:03.241 19:07:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:03.241 19:07:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:03.241 19:07:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:03.241 19:07:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:03.241 19:07:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:03.241 19:07:40 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:03.241 19:07:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:03.241 19:07:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:03.241 19:07:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:03.241 19:07:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:03.241 19:07:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:03.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:03.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:07:03.241 00:07:03.241 --- 10.0.0.2 ping statistics --- 00:07:03.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.241 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:07:03.241 19:07:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:03.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:03.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:07:03.241 00:07:03.241 --- 10.0.0.3 ping statistics --- 00:07:03.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.241 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:03.241 19:07:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:03.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:03.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:03.241 00:07:03.241 --- 10.0.0.1 ping statistics --- 00:07:03.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.241 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:03.241 19:07:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.241 19:07:40 -- nvmf/common.sh@421 -- # return 0 00:07:03.241 19:07:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:03.241 19:07:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.241 19:07:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:03.241 19:07:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:03.241 19:07:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.241 19:07:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:03.241 19:07:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:03.241 19:07:40 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:03.241 19:07:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:03.241 19:07:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:03.241 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.241 19:07:40 -- nvmf/common.sh@469 -- # nvmfpid=61725 00:07:03.241 19:07:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:03.241 19:07:40 -- nvmf/common.sh@470 -- # waitforlisten 61725 00:07:03.241 19:07:40 -- common/autotest_common.sh@817 -- # '[' -z 61725 ']' 00:07:03.241 19:07:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.241 19:07:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:03.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.241 19:07:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
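The nvmf_veth_init block that just completed builds the same fixture used by both runs in this log: the initiator stays in the root namespace on 10.0.0.1 while the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, with veth pairs stitched together by the nvmf_br bridge. Condensed from the commands in the trace (the link-up and teardown steps are omitted here), the topology is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target data interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                     # likewise nvmf_tgt_br and nvmf_tgt_br2
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the trace simply confirm the bridge path works in both directions before the target is started.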
00:07:03.241 19:07:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:03.241 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.500 [2024-02-14 19:07:40.659780] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:03.500 [2024-02-14 19:07:40.659925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.500 [2024-02-14 19:07:40.801309] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.500 [2024-02-14 19:07:40.916957] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:03.500 [2024-02-14 19:07:40.917387] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.500 [2024-02-14 19:07:40.917536] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.500 [2024-02-14 19:07:40.917665] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:03.760 [2024-02-14 19:07:40.917935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.760 [2024-02-14 19:07:40.918001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.760 [2024-02-14 19:07:40.918059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.760 [2024-02-14 19:07:40.918075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.329 19:07:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:04.329 19:07:41 -- common/autotest_common.sh@850 -- # return 0 00:07:04.329 19:07:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:04.329 19:07:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:04.329 19:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.329 19:07:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.329 19:07:41 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:04.329 19:07:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.329 19:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.329 [2024-02-14 19:07:41.722035] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.329 19:07:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.329 19:07:41 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:04.329 19:07:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.329 19:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.588 [2024-02-14 19:07:41.750422] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:04.588 19:07:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.588 19:07:41 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:04.588 19:07:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.588 19:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.588 19:07:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.588 19:07:41 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:04.588 19:07:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.588 19:07:41 -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.588 19:07:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.588 19:07:41 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:04.588 19:07:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.588 19:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.588 19:07:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.588 19:07:41 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:04.588 19:07:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.588 19:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.588 19:07:41 -- target/referrals.sh@48 -- # jq length 00:07:04.588 19:07:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.588 19:07:41 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:04.588 19:07:41 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:04.588 19:07:41 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:04.588 19:07:41 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:04.588 19:07:41 -- target/referrals.sh@21 -- # sort 00:07:04.588 19:07:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.588 19:07:41 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:04.588 19:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.588 19:07:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.588 19:07:41 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:04.588 19:07:41 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:04.588 19:07:41 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:04.588 19:07:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:04.588 19:07:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:04.588 19:07:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:04.588 19:07:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:04.588 19:07:41 -- target/referrals.sh@26 -- # sort 00:07:04.847 19:07:42 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:04.847 19:07:42 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:04.847 19:07:42 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:04.847 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.847 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:04.847 19:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.847 19:07:42 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:04.847 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.847 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:04.847 19:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.847 19:07:42 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:04.847 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.847 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:04.847 19:07:42 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.847 19:07:42 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:04.847 19:07:42 -- target/referrals.sh@56 -- # jq length 00:07:04.847 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.847 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:04.847 19:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.847 19:07:42 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:04.847 19:07:42 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:04.847 19:07:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:04.847 19:07:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:04.847 19:07:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:04.847 19:07:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:04.847 19:07:42 -- target/referrals.sh@26 -- # sort 00:07:04.847 19:07:42 -- target/referrals.sh@26 -- # echo 00:07:04.847 19:07:42 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:04.847 19:07:42 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:04.847 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.847 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:04.847 19:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.847 19:07:42 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:04.847 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.847 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:04.847 19:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.847 19:07:42 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:04.847 19:07:42 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:04.847 19:07:42 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:04.847 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.847 19:07:42 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:04.847 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:04.847 19:07:42 -- target/referrals.sh@21 -- # sort 00:07:04.847 19:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.847 19:07:42 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:04.847 19:07:42 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:04.847 19:07:42 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:04.847 19:07:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:04.847 19:07:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:04.847 19:07:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:04.847 19:07:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:04.847 19:07:42 -- target/referrals.sh@26 -- # sort 00:07:05.107 19:07:42 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:05.107 19:07:42 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:05.107 19:07:42 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:05.107 19:07:42 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:05.107 19:07:42 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:05.107 19:07:42 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.107 19:07:42 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:05.107 19:07:42 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:05.107 19:07:42 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:05.107 19:07:42 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:05.107 19:07:42 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:05.107 19:07:42 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:05.107 19:07:42 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.107 19:07:42 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:05.107 19:07:42 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:05.107 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.107 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.107 19:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.107 19:07:42 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:05.107 19:07:42 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:05.107 19:07:42 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:05.107 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.107 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.107 19:07:42 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:05.107 19:07:42 -- target/referrals.sh@21 -- # sort 00:07:05.107 19:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.366 19:07:42 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:05.366 19:07:42 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:05.366 19:07:42 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:05.366 19:07:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:05.366 19:07:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:05.366 19:07:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.366 19:07:42 -- target/referrals.sh@26 -- # sort 00:07:05.366 19:07:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:05.366 19:07:42 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:05.366 19:07:42 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:05.366 19:07:42 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme 
subsystem' 00:07:05.366 19:07:42 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:05.366 19:07:42 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:05.366 19:07:42 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.366 19:07:42 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:05.366 19:07:42 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:05.366 19:07:42 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:05.366 19:07:42 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:05.366 19:07:42 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:05.366 19:07:42 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.366 19:07:42 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:05.366 19:07:42 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:05.367 19:07:42 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:05.367 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.367 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.367 19:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.367 19:07:42 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:05.367 19:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.367 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.367 19:07:42 -- target/referrals.sh@82 -- # jq length 00:07:05.367 19:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.626 19:07:42 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:05.626 19:07:42 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:05.626 19:07:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:05.626 19:07:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:05.626 19:07:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:05.626 19:07:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.626 19:07:42 -- target/referrals.sh@26 -- # sort 00:07:05.626 19:07:42 -- target/referrals.sh@26 -- # echo 00:07:05.626 19:07:42 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:05.626 19:07:42 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:05.626 19:07:42 -- target/referrals.sh@86 -- # nvmftestfini 00:07:05.626 19:07:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:05.626 19:07:42 -- nvmf/common.sh@116 -- # sync 00:07:05.626 19:07:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:05.626 19:07:42 -- nvmf/common.sh@119 -- # set +e 00:07:05.626 19:07:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:05.626 19:07:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:05.626 rmmod nvme_tcp 00:07:05.626 rmmod nvme_fabrics 00:07:05.626 rmmod nvme_keyring 
00:07:05.626 19:07:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:05.626 19:07:42 -- nvmf/common.sh@123 -- # set -e 00:07:05.626 19:07:42 -- nvmf/common.sh@124 -- # return 0 00:07:05.626 19:07:42 -- nvmf/common.sh@477 -- # '[' -n 61725 ']' 00:07:05.626 19:07:42 -- nvmf/common.sh@478 -- # killprocess 61725 00:07:05.626 19:07:42 -- common/autotest_common.sh@924 -- # '[' -z 61725 ']' 00:07:05.626 19:07:42 -- common/autotest_common.sh@928 -- # kill -0 61725 00:07:05.626 19:07:42 -- common/autotest_common.sh@929 -- # uname 00:07:05.626 19:07:42 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:05.626 19:07:42 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 61725 00:07:05.626 19:07:43 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:05.626 19:07:43 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:05.626 killing process with pid 61725 00:07:05.626 19:07:43 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 61725' 00:07:05.626 19:07:43 -- common/autotest_common.sh@943 -- # kill 61725 00:07:05.626 19:07:43 -- common/autotest_common.sh@948 -- # wait 61725 00:07:06.195 19:07:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:06.195 19:07:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:06.195 19:07:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:06.195 19:07:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:06.195 19:07:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:06.195 19:07:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.195 19:07:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.195 19:07:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.195 19:07:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:06.195 00:07:06.195 real 0m3.325s 00:07:06.195 user 0m10.487s 00:07:06.195 sys 0m0.938s 00:07:06.195 19:07:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.195 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.195 ************************************ 00:07:06.195 END TEST nvmf_referrals 00:07:06.195 ************************************ 00:07:06.195 19:07:43 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:06.195 19:07:43 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:06.195 19:07:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:06.195 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.195 ************************************ 00:07:06.195 START TEST nvmf_connect_disconnect 00:07:06.195 ************************************ 00:07:06.195 19:07:43 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:06.195 * Looking for test storage... 
00:07:06.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:06.195 19:07:43 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:06.195 19:07:43 -- nvmf/common.sh@7 -- # uname -s 00:07:06.195 19:07:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.195 19:07:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.195 19:07:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.195 19:07:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.195 19:07:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.195 19:07:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.195 19:07:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.195 19:07:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.195 19:07:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.195 19:07:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.195 19:07:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:07:06.195 19:07:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:07:06.195 19:07:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.195 19:07:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.195 19:07:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:06.195 19:07:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.195 19:07:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.195 19:07:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.195 19:07:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.196 19:07:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.196 19:07:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.196 19:07:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.196 19:07:43 -- 
paths/export.sh@5 -- # export PATH 00:07:06.196 19:07:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.196 19:07:43 -- nvmf/common.sh@46 -- # : 0 00:07:06.196 19:07:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:06.196 19:07:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:06.196 19:07:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:06.196 19:07:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.196 19:07:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.196 19:07:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:06.196 19:07:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:06.196 19:07:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:06.196 19:07:43 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:06.196 19:07:43 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:06.196 19:07:43 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:06.196 19:07:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:06.196 19:07:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.196 19:07:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:06.196 19:07:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:06.196 19:07:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:06.196 19:07:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.196 19:07:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.196 19:07:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.196 19:07:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:06.196 19:07:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:06.196 19:07:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:06.196 19:07:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:06.196 19:07:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:06.196 19:07:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:06.196 19:07:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.196 19:07:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.196 19:07:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:06.196 19:07:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:06.196 19:07:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:06.196 19:07:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:06.196 19:07:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:06.196 19:07:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.196 19:07:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:06.196 19:07:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:06.196 19:07:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:06.196 19:07:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:06.196 19:07:43 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:07:06.455 19:07:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:06.455 Cannot find device "nvmf_tgt_br" 00:07:06.455 19:07:43 -- nvmf/common.sh@154 -- # true 00:07:06.455 19:07:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:06.455 Cannot find device "nvmf_tgt_br2" 00:07:06.455 19:07:43 -- nvmf/common.sh@155 -- # true 00:07:06.455 19:07:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:06.455 19:07:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:06.455 Cannot find device "nvmf_tgt_br" 00:07:06.455 19:07:43 -- nvmf/common.sh@157 -- # true 00:07:06.455 19:07:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:06.455 Cannot find device "nvmf_tgt_br2" 00:07:06.455 19:07:43 -- nvmf/common.sh@158 -- # true 00:07:06.455 19:07:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:06.455 19:07:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:06.455 19:07:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:06.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:06.455 19:07:43 -- nvmf/common.sh@161 -- # true 00:07:06.455 19:07:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:06.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:06.455 19:07:43 -- nvmf/common.sh@162 -- # true 00:07:06.455 19:07:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:06.455 19:07:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:06.455 19:07:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:06.455 19:07:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:06.455 19:07:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:06.455 19:07:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:06.455 19:07:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:06.455 19:07:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:06.455 19:07:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:06.455 19:07:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:06.455 19:07:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:06.455 19:07:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:06.455 19:07:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:06.455 19:07:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:06.455 19:07:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:06.715 19:07:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:06.715 19:07:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:06.715 19:07:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:06.715 19:07:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:06.715 19:07:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:06.715 19:07:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:06.715 19:07:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:07:06.715 19:07:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:06.715 19:07:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:06.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:07:06.715 00:07:06.715 --- 10.0.0.2 ping statistics --- 00:07:06.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.715 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:07:06.715 19:07:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:06.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:06.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:07:06.715 00:07:06.715 --- 10.0.0.3 ping statistics --- 00:07:06.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.715 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:06.715 19:07:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:06.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:06.715 00:07:06.715 --- 10.0.0.1 ping statistics --- 00:07:06.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.715 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:06.715 19:07:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.715 19:07:43 -- nvmf/common.sh@421 -- # return 0 00:07:06.715 19:07:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:06.715 19:07:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.715 19:07:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:06.715 19:07:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:06.715 19:07:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.715 19:07:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:06.715 19:07:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:06.715 19:07:43 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:06.715 19:07:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:06.715 19:07:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:06.715 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.715 19:07:43 -- nvmf/common.sh@469 -- # nvmfpid=62037 00:07:06.715 19:07:43 -- nvmf/common.sh@470 -- # waitforlisten 62037 00:07:06.715 19:07:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:06.715 19:07:43 -- common/autotest_common.sh@817 -- # '[' -z 62037 ']' 00:07:06.715 19:07:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.715 19:07:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:06.715 19:07:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.715 19:07:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:06.715 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.715 [2024-02-14 19:07:44.034504] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:06.715 [2024-02-14 19:07:44.034581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.975 [2024-02-14 19:07:44.170036] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.975 [2024-02-14 19:07:44.307349] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:06.975 [2024-02-14 19:07:44.307776] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.975 [2024-02-14 19:07:44.307924] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.975 [2024-02-14 19:07:44.308011] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.975 [2024-02-14 19:07:44.308265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.975 [2024-02-14 19:07:44.308999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.975 [2024-02-14 19:07:44.309141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.975 [2024-02-14 19:07:44.309250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.912 19:07:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:07.912 19:07:45 -- common/autotest_common.sh@850 -- # return 0 00:07:07.912 19:07:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:07.912 19:07:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:07.912 19:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:07.912 19:07:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.912 19:07:45 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:07.912 19:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.912 19:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:07.912 [2024-02-14 19:07:45.108180] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.912 19:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.912 19:07:45 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:07.912 19:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.912 19:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:07.912 19:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.912 19:07:45 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:07.912 19:07:45 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:07.912 19:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.912 19:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:07.912 19:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.912 19:07:45 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:07.912 19:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.912 19:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:07.912 19:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.912 19:07:45 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.912 19:07:45 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.912 19:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:07.912 [2024-02-14 19:07:45.187761] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.912 19:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.912 19:07:45 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:07.912 19:07:45 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:07.912 19:07:45 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:07.912 19:07:45 -- target/connect_disconnect.sh@34 -- # set +x 00:07:10.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:08:40.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.642 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.389 19:11:30 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:53.389 19:11:30 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:53.389 19:11:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:53.389 19:11:30 -- nvmf/common.sh@116 -- # sync 00:10:53.389 19:11:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:53.389 19:11:30 -- nvmf/common.sh@119 -- # set +e 00:10:53.389 19:11:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:53.389 19:11:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:53.389 rmmod nvme_tcp 00:10:53.389 rmmod nvme_fabrics 00:10:53.389 rmmod nvme_keyring 00:10:53.389 19:11:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:53.389 19:11:30 -- nvmf/common.sh@123 -- # set -e 00:10:53.389 19:11:30 -- nvmf/common.sh@124 -- # return 0 00:10:53.389 19:11:30 -- nvmf/common.sh@477 -- # '[' -n 62037 ']' 00:10:53.389 19:11:30 -- nvmf/common.sh@478 -- # killprocess 62037 00:10:53.389 19:11:30 -- common/autotest_common.sh@924 -- # '[' -z 62037 ']' 00:10:53.389 19:11:30 -- common/autotest_common.sh@928 -- # kill -0 62037 00:10:53.389 19:11:30 -- common/autotest_common.sh@929 -- # uname 00:10:53.389 19:11:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:10:53.389 19:11:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 62037 00:10:53.389 19:11:30 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:10:53.389 19:11:30 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:10:53.389 19:11:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 62037' 00:10:53.389 killing process with pid 62037 00:10:53.389 19:11:30 -- common/autotest_common.sh@943 -- # kill 62037 00:10:53.389 19:11:30 -- common/autotest_common.sh@948 -- # wait 62037 00:10:53.647 19:11:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:53.647 19:11:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:53.647 19:11:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:53.647 19:11:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.647 19:11:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:53.647 19:11:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.647 19:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.647 19:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.647 19:11:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:53.647 00:10:53.647 real 3m47.522s 00:10:53.647 user 14m44.312s 00:10:53.647 sys 0m23.814s 00:10:53.647 19:11:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:53.647 
************************************ 00:10:53.647 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:10:53.647 END TEST nvmf_connect_disconnect 00:10:53.647 ************************************ 00:10:53.647 19:11:31 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:53.648 19:11:31 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:10:53.648 19:11:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:53.648 19:11:31 -- common/autotest_common.sh@10 -- # set +x 00:10:53.648 ************************************ 00:10:53.648 START TEST nvmf_multitarget 00:10:53.648 ************************************ 00:10:53.648 19:11:31 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:53.906 * Looking for test storage... 00:10:53.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:53.907 19:11:31 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:53.907 19:11:31 -- nvmf/common.sh@7 -- # uname -s 00:10:53.907 19:11:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.907 19:11:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.907 19:11:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.907 19:11:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.907 19:11:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.907 19:11:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.907 19:11:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.907 19:11:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.907 19:11:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.907 19:11:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.907 19:11:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:10:53.907 19:11:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:10:53.907 19:11:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.907 19:11:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.907 19:11:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:53.907 19:11:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:53.907 19:11:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.907 19:11:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.907 19:11:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.907 19:11:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.907 19:11:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.907 19:11:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.907 19:11:31 -- paths/export.sh@5 -- # export PATH 00:10:53.907 19:11:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.907 19:11:31 -- nvmf/common.sh@46 -- # : 0 00:10:53.907 19:11:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:53.907 19:11:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:53.907 19:11:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:53.907 19:11:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.907 19:11:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.907 19:11:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:53.907 19:11:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:53.907 19:11:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:53.907 19:11:31 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:53.907 19:11:31 -- target/multitarget.sh@15 -- # nvmftestinit 00:10:53.907 19:11:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:53.907 19:11:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.907 19:11:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:53.907 19:11:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:53.907 19:11:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:53.907 19:11:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.907 19:11:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.907 19:11:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.907 19:11:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:53.907 19:11:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:53.907 19:11:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:53.907 19:11:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:53.907 19:11:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:53.907 19:11:31 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:10:53.907 19:11:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.907 19:11:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.907 19:11:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:53.907 19:11:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:53.907 19:11:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:53.907 19:11:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:53.907 19:11:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:53.907 19:11:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.907 19:11:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:53.907 19:11:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:53.907 19:11:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:53.907 19:11:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:53.907 19:11:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:53.907 19:11:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:53.907 Cannot find device "nvmf_tgt_br" 00:10:53.907 19:11:31 -- nvmf/common.sh@154 -- # true 00:10:53.907 19:11:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:53.907 Cannot find device "nvmf_tgt_br2" 00:10:53.907 19:11:31 -- nvmf/common.sh@155 -- # true 00:10:53.907 19:11:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:53.907 19:11:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:53.907 Cannot find device "nvmf_tgt_br" 00:10:53.907 19:11:31 -- nvmf/common.sh@157 -- # true 00:10:53.907 19:11:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:53.907 Cannot find device "nvmf_tgt_br2" 00:10:53.907 19:11:31 -- nvmf/common.sh@158 -- # true 00:10:53.907 19:11:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:53.907 19:11:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:53.907 19:11:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:53.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.907 19:11:31 -- nvmf/common.sh@161 -- # true 00:10:53.907 19:11:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:53.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.907 19:11:31 -- nvmf/common.sh@162 -- # true 00:10:53.907 19:11:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:53.907 19:11:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:53.907 19:11:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:54.166 19:11:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:54.166 19:11:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:54.166 19:11:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:54.166 19:11:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:54.166 19:11:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:54.166 19:11:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:54.166 19:11:31 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:10:54.166 19:11:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:54.166 19:11:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:54.166 19:11:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:54.166 19:11:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:54.166 19:11:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:54.166 19:11:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:54.166 19:11:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:54.166 19:11:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:54.166 19:11:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:54.166 19:11:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:54.166 19:11:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:54.166 19:11:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:54.166 19:11:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:54.166 19:11:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:54.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:10:54.166 00:10:54.167 --- 10.0.0.2 ping statistics --- 00:10:54.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.167 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:54.167 19:11:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:54.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:54.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:10:54.167 00:10:54.167 --- 10.0.0.3 ping statistics --- 00:10:54.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.167 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:54.167 19:11:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:54.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:54.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:10:54.167 00:10:54.167 --- 10.0.0.1 ping statistics --- 00:10:54.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.167 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:54.167 19:11:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.167 19:11:31 -- nvmf/common.sh@421 -- # return 0 00:10:54.167 19:11:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:54.167 19:11:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.167 19:11:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:54.167 19:11:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:54.167 19:11:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.167 19:11:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:54.167 19:11:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:54.167 19:11:31 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:54.167 19:11:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:54.167 19:11:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:54.167 19:11:31 -- common/autotest_common.sh@10 -- # set +x 00:10:54.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:54.167 19:11:31 -- nvmf/common.sh@469 -- # nvmfpid=65817 00:10:54.167 19:11:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.167 19:11:31 -- nvmf/common.sh@470 -- # waitforlisten 65817 00:10:54.167 19:11:31 -- common/autotest_common.sh@817 -- # '[' -z 65817 ']' 00:10:54.167 19:11:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.167 19:11:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:54.167 19:11:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.167 19:11:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:54.167 19:11:31 -- common/autotest_common.sh@10 -- # set +x 00:10:54.167 [2024-02-14 19:11:31.577310] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:10:54.167 [2024-02-14 19:11:31.577704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.425 [2024-02-14 19:11:31.713741] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.684 [2024-02-14 19:11:31.866877] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:54.684 [2024-02-14 19:11:31.867348] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.684 [2024-02-14 19:11:31.867500] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.684 [2024-02-14 19:11:31.867613] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:54.684 [2024-02-14 19:11:31.868017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.684 [2024-02-14 19:11:31.868109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.684 [2024-02-14 19:11:31.868274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.684 [2024-02-14 19:11:31.868278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.250 19:11:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:55.250 19:11:32 -- common/autotest_common.sh@850 -- # return 0 00:10:55.250 19:11:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:55.250 19:11:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:55.250 19:11:32 -- common/autotest_common.sh@10 -- # set +x 00:10:55.250 19:11:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.250 19:11:32 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:55.250 19:11:32 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:55.250 19:11:32 -- target/multitarget.sh@21 -- # jq length 00:10:55.570 19:11:32 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:55.570 19:11:32 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:55.570 "nvmf_tgt_1" 00:10:55.570 19:11:32 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:55.570 "nvmf_tgt_2" 00:10:55.570 19:11:32 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:55.570 19:11:32 -- target/multitarget.sh@28 -- # jq length 00:10:55.829 19:11:33 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:55.829 19:11:33 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:55.829 true 00:10:55.829 19:11:33 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:56.087 true 00:10:56.087 19:11:33 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:56.087 19:11:33 -- target/multitarget.sh@35 -- # jq length 00:10:56.087 19:11:33 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:56.087 19:11:33 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:56.087 19:11:33 -- target/multitarget.sh@41 -- # nvmftestfini 00:10:56.087 19:11:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:56.087 19:11:33 -- nvmf/common.sh@116 -- # sync 00:10:56.344 19:11:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:56.344 19:11:33 -- nvmf/common.sh@119 -- # set +e 00:10:56.344 19:11:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:56.344 19:11:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:56.344 rmmod nvme_tcp 00:10:56.344 rmmod nvme_fabrics 00:10:56.344 rmmod nvme_keyring 00:10:56.344 19:11:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:56.344 19:11:33 -- nvmf/common.sh@123 -- # set -e 00:10:56.344 19:11:33 -- nvmf/common.sh@124 -- # return 0 00:10:56.344 19:11:33 -- nvmf/common.sh@477 -- # '[' -n 65817 ']' 00:10:56.344 19:11:33 -- nvmf/common.sh@478 -- # killprocess 65817 00:10:56.344 19:11:33 
-- common/autotest_common.sh@924 -- # '[' -z 65817 ']' 00:10:56.344 19:11:33 -- common/autotest_common.sh@928 -- # kill -0 65817 00:10:56.344 19:11:33 -- common/autotest_common.sh@929 -- # uname 00:10:56.344 19:11:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:10:56.344 19:11:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 65817 00:10:56.344 killing process with pid 65817 00:10:56.344 19:11:33 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:10:56.344 19:11:33 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:10:56.344 19:11:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 65817' 00:10:56.344 19:11:33 -- common/autotest_common.sh@943 -- # kill 65817 00:10:56.344 19:11:33 -- common/autotest_common.sh@948 -- # wait 65817 00:10:56.602 19:11:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:56.602 19:11:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:56.602 19:11:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:56.602 19:11:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.602 19:11:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:56.602 19:11:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.602 19:11:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.603 19:11:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.603 19:11:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:56.603 ************************************ 00:10:56.603 END TEST nvmf_multitarget 00:10:56.603 ************************************ 00:10:56.603 00:10:56.603 real 0m2.951s 00:10:56.603 user 0m9.190s 00:10:56.603 sys 0m0.771s 00:10:56.603 19:11:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:56.603 19:11:33 -- common/autotest_common.sh@10 -- # set +x 00:10:56.861 19:11:34 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:56.861 19:11:34 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:10:56.861 19:11:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:56.861 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:10:56.861 ************************************ 00:10:56.861 START TEST nvmf_rpc 00:10:56.861 ************************************ 00:10:56.861 19:11:34 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:56.861 * Looking for test storage... 
00:10:56.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:56.861 19:11:34 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:56.861 19:11:34 -- nvmf/common.sh@7 -- # uname -s 00:10:56.861 19:11:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.861 19:11:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.861 19:11:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.861 19:11:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.861 19:11:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.861 19:11:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.861 19:11:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.861 19:11:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.861 19:11:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.861 19:11:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.861 19:11:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:10:56.861 19:11:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:10:56.861 19:11:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.861 19:11:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.861 19:11:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:56.861 19:11:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.861 19:11:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.861 19:11:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.861 19:11:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.861 19:11:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.861 19:11:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.861 19:11:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.861 19:11:34 -- paths/export.sh@5 
-- # export PATH 00:10:56.861 19:11:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.861 19:11:34 -- nvmf/common.sh@46 -- # : 0 00:10:56.861 19:11:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:56.861 19:11:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:56.861 19:11:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:56.861 19:11:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.861 19:11:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.861 19:11:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:56.861 19:11:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:56.861 19:11:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:56.861 19:11:34 -- target/rpc.sh@11 -- # loops=5 00:10:56.861 19:11:34 -- target/rpc.sh@23 -- # nvmftestinit 00:10:56.861 19:11:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:56.861 19:11:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.861 19:11:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:56.861 19:11:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:56.861 19:11:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:56.861 19:11:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.861 19:11:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.861 19:11:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.861 19:11:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:56.862 19:11:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:56.862 19:11:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:56.862 19:11:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:56.862 19:11:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:56.862 19:11:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:56.862 19:11:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.862 19:11:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.862 19:11:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:56.862 19:11:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:56.862 19:11:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:56.862 19:11:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:56.862 19:11:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:56.862 19:11:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.862 19:11:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:56.862 19:11:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:56.862 19:11:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:56.862 19:11:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:56.862 19:11:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:56.862 19:11:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:56.862 Cannot find device 
"nvmf_tgt_br" 00:10:56.862 19:11:34 -- nvmf/common.sh@154 -- # true 00:10:56.862 19:11:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:56.862 Cannot find device "nvmf_tgt_br2" 00:10:56.862 19:11:34 -- nvmf/common.sh@155 -- # true 00:10:56.862 19:11:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:56.862 19:11:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:56.862 Cannot find device "nvmf_tgt_br" 00:10:56.862 19:11:34 -- nvmf/common.sh@157 -- # true 00:10:56.862 19:11:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:56.862 Cannot find device "nvmf_tgt_br2" 00:10:56.862 19:11:34 -- nvmf/common.sh@158 -- # true 00:10:56.862 19:11:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:56.862 19:11:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:57.120 19:11:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.120 19:11:34 -- nvmf/common.sh@161 -- # true 00:10:57.120 19:11:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.120 19:11:34 -- nvmf/common.sh@162 -- # true 00:10:57.120 19:11:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:57.120 19:11:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:57.120 19:11:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:57.120 19:11:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:57.120 19:11:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:57.120 19:11:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:57.120 19:11:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:57.120 19:11:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:57.120 19:11:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:57.120 19:11:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:57.120 19:11:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:57.120 19:11:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:57.120 19:11:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:57.120 19:11:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:57.120 19:11:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:57.120 19:11:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:57.120 19:11:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:57.120 19:11:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:57.120 19:11:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:57.120 19:11:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:57.120 19:11:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:57.120 19:11:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:57.120 19:11:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:57.120 19:11:34 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:57.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:10:57.120 00:10:57.120 --- 10.0.0.2 ping statistics --- 00:10:57.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.120 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:57.120 19:11:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:57.121 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:57.121 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:10:57.121 00:10:57.121 --- 10.0.0.3 ping statistics --- 00:10:57.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.121 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:57.121 19:11:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:57.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:10:57.121 00:10:57.121 --- 10.0.0.1 ping statistics --- 00:10:57.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.121 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:57.121 19:11:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.121 19:11:34 -- nvmf/common.sh@421 -- # return 0 00:10:57.121 19:11:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:57.121 19:11:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.121 19:11:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:57.121 19:11:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:57.121 19:11:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.121 19:11:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:57.121 19:11:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:57.379 19:11:34 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:57.379 19:11:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:57.379 19:11:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:57.379 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:10:57.379 19:11:34 -- nvmf/common.sh@469 -- # nvmfpid=66049 00:10:57.379 19:11:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.379 19:11:34 -- nvmf/common.sh@470 -- # waitforlisten 66049 00:10:57.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.379 19:11:34 -- common/autotest_common.sh@817 -- # '[' -z 66049 ']' 00:10:57.379 19:11:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.379 19:11:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:57.379 19:11:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.379 19:11:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:57.379 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:10:57.379 [2024-02-14 19:11:34.605472] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
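With the namespace reachable, nvmfappstart launches the target inside it (the NVMF_TARGET_NS_CMD prefix prepended to NVMF_APP above) and waitforlisten blocks until the JSON-RPC socket answers. A rough sketch of that step; the polling command is illustrative, the real helper is more careful about timeouts and the socket path:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # illustrative poll loop; the in-tree waitforlisten helper differs
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done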
00:10:57.379 [2024-02-14 19:11:34.605877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.379 [2024-02-14 19:11:34.742920] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.637 [2024-02-14 19:11:34.906196] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:57.637 [2024-02-14 19:11:34.906834] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.637 [2024-02-14 19:11:34.906913] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.637 [2024-02-14 19:11:34.907068] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.637 [2024-02-14 19:11:34.907323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.637 [2024-02-14 19:11:34.907958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.637 [2024-02-14 19:11:34.908148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.637 [2024-02-14 19:11:34.908157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.205 19:11:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:58.205 19:11:35 -- common/autotest_common.sh@850 -- # return 0 00:10:58.205 19:11:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:58.205 19:11:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:58.205 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.205 19:11:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.205 19:11:35 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:58.205 19:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.205 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.205 19:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.205 19:11:35 -- target/rpc.sh@26 -- # stats='{ 00:10:58.205 "poll_groups": [ 00:10:58.205 { 00:10:58.205 "admin_qpairs": 0, 00:10:58.205 "completed_nvme_io": 0, 00:10:58.205 "current_admin_qpairs": 0, 00:10:58.205 "current_io_qpairs": 0, 00:10:58.205 "io_qpairs": 0, 00:10:58.205 "name": "nvmf_tgt_poll_group_0", 00:10:58.205 "pending_bdev_io": 0, 00:10:58.205 "transports": [] 00:10:58.205 }, 00:10:58.205 { 00:10:58.205 "admin_qpairs": 0, 00:10:58.205 "completed_nvme_io": 0, 00:10:58.205 "current_admin_qpairs": 0, 00:10:58.205 "current_io_qpairs": 0, 00:10:58.205 "io_qpairs": 0, 00:10:58.205 "name": "nvmf_tgt_poll_group_1", 00:10:58.205 "pending_bdev_io": 0, 00:10:58.205 "transports": [] 00:10:58.205 }, 00:10:58.205 { 00:10:58.205 "admin_qpairs": 0, 00:10:58.205 "completed_nvme_io": 0, 00:10:58.205 "current_admin_qpairs": 0, 00:10:58.205 "current_io_qpairs": 0, 00:10:58.205 "io_qpairs": 0, 00:10:58.205 "name": "nvmf_tgt_poll_group_2", 00:10:58.205 "pending_bdev_io": 0, 00:10:58.205 "transports": [] 00:10:58.205 }, 00:10:58.205 { 00:10:58.205 "admin_qpairs": 0, 00:10:58.205 "completed_nvme_io": 0, 00:10:58.205 "current_admin_qpairs": 0, 00:10:58.205 "current_io_qpairs": 0, 00:10:58.205 "io_qpairs": 0, 00:10:58.205 "name": "nvmf_tgt_poll_group_3", 00:10:58.205 "pending_bdev_io": 0, 00:10:58.205 "transports": [] 00:10:58.205 } 00:10:58.205 ], 00:10:58.205 "tick_rate": 2200000000 00:10:58.205 }' 00:10:58.205 
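The stats JSON above is what the jcount/jsum helpers used next operate on: jq extracts one field per poll group and the result is either counted or summed. A sketch of the two helpers as they appear in the trace, assuming the nvmf_get_stats output has been captured into $stats as done here:

    # assumes $stats holds the nvmf_get_stats JSON captured above
    jcount() {   # how many poll groups expose this field
        local filter=$1
        echo "$stats" | jq "$filter" | wc -l
    }
    jsum() {     # sum the field across all poll groups
        local filter=$1
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    jcount '.poll_groups[].name'       # 4: one poll group per core in the 0xF mask
    jsum  '.poll_groups[].io_qpairs'   # 0 before any initiator connects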
19:11:35 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:58.205 19:11:35 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:58.205 19:11:35 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:58.205 19:11:35 -- target/rpc.sh@15 -- # wc -l 00:10:58.464 19:11:35 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:58.464 19:11:35 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:58.464 19:11:35 -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:58.464 19:11:35 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:58.464 19:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.464 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.464 [2024-02-14 19:11:35.700809] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.464 19:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.464 19:11:35 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:58.464 19:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.464 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.464 19:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.464 19:11:35 -- target/rpc.sh@33 -- # stats='{ 00:10:58.464 "poll_groups": [ 00:10:58.464 { 00:10:58.464 "admin_qpairs": 0, 00:10:58.464 "completed_nvme_io": 0, 00:10:58.465 "current_admin_qpairs": 0, 00:10:58.465 "current_io_qpairs": 0, 00:10:58.465 "io_qpairs": 0, 00:10:58.465 "name": "nvmf_tgt_poll_group_0", 00:10:58.465 "pending_bdev_io": 0, 00:10:58.465 "transports": [ 00:10:58.465 { 00:10:58.465 "trtype": "TCP" 00:10:58.465 } 00:10:58.465 ] 00:10:58.465 }, 00:10:58.465 { 00:10:58.465 "admin_qpairs": 0, 00:10:58.465 "completed_nvme_io": 0, 00:10:58.465 "current_admin_qpairs": 0, 00:10:58.465 "current_io_qpairs": 0, 00:10:58.465 "io_qpairs": 0, 00:10:58.465 "name": "nvmf_tgt_poll_group_1", 00:10:58.465 "pending_bdev_io": 0, 00:10:58.465 "transports": [ 00:10:58.465 { 00:10:58.465 "trtype": "TCP" 00:10:58.465 } 00:10:58.465 ] 00:10:58.465 }, 00:10:58.465 { 00:10:58.465 "admin_qpairs": 0, 00:10:58.465 "completed_nvme_io": 0, 00:10:58.465 "current_admin_qpairs": 0, 00:10:58.465 "current_io_qpairs": 0, 00:10:58.465 "io_qpairs": 0, 00:10:58.465 "name": "nvmf_tgt_poll_group_2", 00:10:58.465 "pending_bdev_io": 0, 00:10:58.465 "transports": [ 00:10:58.465 { 00:10:58.465 "trtype": "TCP" 00:10:58.465 } 00:10:58.465 ] 00:10:58.465 }, 00:10:58.465 { 00:10:58.465 "admin_qpairs": 0, 00:10:58.465 "completed_nvme_io": 0, 00:10:58.465 "current_admin_qpairs": 0, 00:10:58.465 "current_io_qpairs": 0, 00:10:58.465 "io_qpairs": 0, 00:10:58.465 "name": "nvmf_tgt_poll_group_3", 00:10:58.465 "pending_bdev_io": 0, 00:10:58.465 "transports": [ 00:10:58.465 { 00:10:58.465 "trtype": "TCP" 00:10:58.465 } 00:10:58.465 ] 00:10:58.465 } 00:10:58.465 ], 00:10:58.465 "tick_rate": 2200000000 00:10:58.465 }' 00:10:58.465 19:11:35 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:58.465 19:11:35 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:58.465 19:11:35 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:58.465 19:11:35 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:58.465 19:11:35 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:58.465 19:11:35 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:58.465 19:11:35 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:58.465 19:11:35 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:58.465 19:11:35 -- 
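rpc_cmd forwards to SPDK's scripts/rpc.py client, so the transport creation and the before/after check can be reproduced with an equivalent direct invocation; the options are taken verbatim from NVMF_TRANSPORT_OPTS plus the -u 8192 used above:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # before the call: .poll_groups[0].transports[0] is null
    # after the call:  every poll group reports {"trtype": "TCP"}
    scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].transports[].trtype'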
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:58.465 19:11:35 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:58.465 19:11:35 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:58.465 19:11:35 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:58.465 19:11:35 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:58.465 19:11:35 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:58.465 19:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.465 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.724 Malloc1 00:10:58.725 19:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.725 19:11:35 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:58.725 19:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.725 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.725 19:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.725 19:11:35 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:58.725 19:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.725 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.725 19:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.725 19:11:35 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:58.725 19:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.725 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.725 19:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.725 19:11:35 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.725 19:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.725 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.725 [2024-02-14 19:11:35.918245] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.725 19:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.725 19:11:35 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef -a 10.0.0.2 -s 4420 00:10:58.725 19:11:35 -- common/autotest_common.sh@638 -- # local es=0 00:10:58.725 19:11:35 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef -a 10.0.0.2 -s 4420 00:10:58.725 19:11:35 -- common/autotest_common.sh@626 -- # local arg=nvme 00:10:58.725 19:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:58.725 19:11:35 -- common/autotest_common.sh@630 -- # type -t nvme 00:10:58.725 19:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:58.725 19:11:35 -- common/autotest_common.sh@632 -- # type -P nvme 00:10:58.725 19:11:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:58.725 19:11:35 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:10:58.725 19:11:35 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:10:58.725 19:11:35 -- 
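Up to this point the target side has been provisioned: a RAM-backed bdev, a subsystem with allow_any_host explicitly disabled, the bdev attached as a namespace, and a TCP listener on the namespace address. Collapsed into plain scripts/rpc.py calls, that sequence is:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                 # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                                      # -a allow any host, -s serial number
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # -d: require an explicit allow-list
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

Disabling allow_any_host is what sets up the negative connect test that follows.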
common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef -a 10.0.0.2 -s 4420 00:10:58.725 [2024-02-14 19:11:35.940609] ctrlr.c: 742:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef' 00:10:58.725 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:58.725 could not add new controller: failed to write to nvme-fabrics device 00:10:58.725 19:11:35 -- common/autotest_common.sh@641 -- # es=1 00:10:58.725 19:11:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:58.725 19:11:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:58.725 19:11:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:58.725 19:11:35 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:10:58.725 19:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.725 19:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:58.725 19:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.725 19:11:35 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.725 19:11:36 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.725 19:11:36 -- common/autotest_common.sh@1175 -- # local i=0 00:10:58.725 19:11:36 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.725 19:11:36 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:10:58.725 19:11:36 -- common/autotest_common.sh@1182 -- # sleep 2 00:11:01.259 19:11:38 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:11:01.259 19:11:38 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:11:01.259 19:11:38 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.259 19:11:38 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:11:01.259 19:11:38 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.259 19:11:38 -- common/autotest_common.sh@1185 -- # return 0 00:11:01.259 19:11:38 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.259 19:11:38 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.259 19:11:38 -- common/autotest_common.sh@1196 -- # local i=0 00:11:01.259 19:11:38 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:11:01.259 19:11:38 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.259 19:11:38 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.259 19:11:38 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:01.259 19:11:38 -- common/autotest_common.sh@1208 -- # return 0 00:11:01.259 19:11:38 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:11:01.259 19:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.259 19:11:38 -- common/autotest_common.sh@10 
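The access-control round trip above is the core of this step: with allow_any_host disabled the first connect is rejected by the target ("does not allow host ...") and the kernel reports it as a write error on /dev/nvme-fabrics; after nvmf_subsystem_add_host whitelists the host NQN, the identical connect succeeds. Reduced to the commands involved (host NQN/ID are the values exported by nvmf/common.sh earlier):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
         --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # -> fails: "could not add new controller: failed to write to nvme-fabrics device"

    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
         --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # -> succeeds; a block device with serial SPDKISFASTANDAWESOME appears on the host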
-- # set +x 00:11:01.259 19:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.259 19:11:38 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:01.259 19:11:38 -- common/autotest_common.sh@638 -- # local es=0 00:11:01.259 19:11:38 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:01.259 19:11:38 -- common/autotest_common.sh@626 -- # local arg=nvme 00:11:01.259 19:11:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:01.259 19:11:38 -- common/autotest_common.sh@630 -- # type -t nvme 00:11:01.259 19:11:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:01.259 19:11:38 -- common/autotest_common.sh@632 -- # type -P nvme 00:11:01.259 19:11:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:01.259 19:11:38 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:11:01.259 19:11:38 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:11:01.259 19:11:38 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:01.259 [2024-02-14 19:11:38.242290] ctrlr.c: 742:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef' 00:11:01.259 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:01.259 could not add new controller: failed to write to nvme-fabrics device 00:11:01.259 19:11:38 -- common/autotest_common.sh@641 -- # es=1 00:11:01.259 19:11:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:01.259 19:11:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:01.259 19:11:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:01.259 19:11:38 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:01.259 19:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.259 19:11:38 -- common/autotest_common.sh@10 -- # set +x 00:11:01.259 19:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.259 19:11:38 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:01.259 19:11:38 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:01.259 19:11:38 -- common/autotest_common.sh@1175 -- # local i=0 00:11:01.260 19:11:38 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:11:01.260 19:11:38 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:11:01.260 19:11:38 -- common/autotest_common.sh@1182 -- # sleep 2 00:11:03.164 19:11:40 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:11:03.164 19:11:40 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:11:03.164 19:11:40 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:11:03.164 19:11:40 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:11:03.164 19:11:40 
-- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:11:03.164 19:11:40 -- common/autotest_common.sh@1185 -- # return 0 00:11:03.164 19:11:40 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.164 19:11:40 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.164 19:11:40 -- common/autotest_common.sh@1196 -- # local i=0 00:11:03.164 19:11:40 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:11:03.164 19:11:40 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.164 19:11:40 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:03.164 19:11:40 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.164 19:11:40 -- common/autotest_common.sh@1208 -- # return 0 00:11:03.164 19:11:40 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.164 19:11:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.164 19:11:40 -- common/autotest_common.sh@10 -- # set +x 00:11:03.164 19:11:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.164 19:11:40 -- target/rpc.sh@81 -- # seq 1 5 00:11:03.164 19:11:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:03.164 19:11:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:03.164 19:11:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.164 19:11:40 -- common/autotest_common.sh@10 -- # set +x 00:11:03.164 19:11:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.164 19:11:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.164 19:11:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.164 19:11:40 -- common/autotest_common.sh@10 -- # set +x 00:11:03.164 [2024-02-14 19:11:40.527784] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.164 19:11:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.164 19:11:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:03.164 19:11:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.164 19:11:40 -- common/autotest_common.sh@10 -- # set +x 00:11:03.164 19:11:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.164 19:11:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:03.164 19:11:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.164 19:11:40 -- common/autotest_common.sh@10 -- # set +x 00:11:03.164 19:11:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.164 19:11:40 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.423 19:11:40 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.423 19:11:40 -- common/autotest_common.sh@1175 -- # local i=0 00:11:03.423 19:11:40 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.423 19:11:40 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:11:03.423 19:11:40 -- common/autotest_common.sh@1182 -- # sleep 2 00:11:05.325 19:11:42 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 
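Each connect/disconnect cycle is gated by the waitforserial and waitforserial_disconnect helpers, which poll lsblk until a block device with the subsystem serial appears (or disappears). A hypothetical reconstruction of the appearing-side loop visible in the trace:

    # hypothetical reconstruction of waitforserial; the in-tree helper differs slightly
    waitforserial() {
        local serial=$1 want=${2:-1} i=0 have
        while (( i++ <= 15 )); do
            sleep 2
            have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            (( have == want )) && return 0     # the namespace showed up as a block device
        done
        return 1                               # never appeared; fail the test
    }

    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1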
00:11:05.325 19:11:42 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:11:05.325 19:11:42 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.325 19:11:42 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:11:05.325 19:11:42 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.325 19:11:42 -- common/autotest_common.sh@1185 -- # return 0 00:11:05.325 19:11:42 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.583 19:11:42 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.583 19:11:42 -- common/autotest_common.sh@1196 -- # local i=0 00:11:05.583 19:11:42 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:11:05.583 19:11:42 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.583 19:11:42 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:05.583 19:11:42 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.583 19:11:42 -- common/autotest_common.sh@1208 -- # return 0 00:11:05.583 19:11:42 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:05.583 19:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.583 19:11:42 -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 19:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.583 19:11:42 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.583 19:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.583 19:11:42 -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 19:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.583 19:11:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:05.583 19:11:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.583 19:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.583 19:11:42 -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 19:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.583 19:11:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.583 19:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.583 19:11:42 -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 [2024-02-14 19:11:42.824394] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.583 19:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.583 19:11:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:05.583 19:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.583 19:11:42 -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 19:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.583 19:11:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.583 19:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.583 19:11:42 -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 19:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.583 19:11:42 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 
--hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.841 19:11:43 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.841 19:11:43 -- common/autotest_common.sh@1175 -- # local i=0 00:11:05.841 19:11:43 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.841 19:11:43 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:11:05.841 19:11:43 -- common/autotest_common.sh@1182 -- # sleep 2 00:11:07.739 19:11:45 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:11:07.739 19:11:45 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:11:07.739 19:11:45 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.739 19:11:45 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:11:07.739 19:11:45 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.739 19:11:45 -- common/autotest_common.sh@1185 -- # return 0 00:11:07.739 19:11:45 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.739 19:11:45 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.739 19:11:45 -- common/autotest_common.sh@1196 -- # local i=0 00:11:07.739 19:11:45 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:11:07.739 19:11:45 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.739 19:11:45 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:07.739 19:11:45 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.739 19:11:45 -- common/autotest_common.sh@1208 -- # return 0 00:11:07.739 19:11:45 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.739 19:11:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.739 19:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:07.739 19:11:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.739 19:11:45 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.739 19:11:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.739 19:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:07.740 19:11:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.740 19:11:45 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:07.740 19:11:45 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.740 19:11:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.740 19:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:07.740 19:11:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.740 19:11:45 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.740 19:11:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.740 19:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:07.740 [2024-02-14 19:11:45.129216] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.740 19:11:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.740 19:11:45 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:07.740 19:11:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.740 19:11:45 -- common/autotest_common.sh@10 -- # set 
+x 00:11:07.740 19:11:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.740 19:11:45 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.740 19:11:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.740 19:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:07.740 19:11:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.740 19:11:45 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.998 19:11:45 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.998 19:11:45 -- common/autotest_common.sh@1175 -- # local i=0 00:11:07.998 19:11:45 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.998 19:11:45 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:11:07.998 19:11:45 -- common/autotest_common.sh@1182 -- # sleep 2 00:11:09.925 19:11:47 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:11:09.925 19:11:47 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.925 19:11:47 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:11:09.925 19:11:47 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:11:09.925 19:11:47 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.925 19:11:47 -- common/autotest_common.sh@1185 -- # return 0 00:11:09.925 19:11:47 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.184 19:11:47 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.184 19:11:47 -- common/autotest_common.sh@1196 -- # local i=0 00:11:10.184 19:11:47 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:11:10.184 19:11:47 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.184 19:11:47 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:10.184 19:11:47 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.184 19:11:47 -- common/autotest_common.sh@1208 -- # return 0 00:11:10.184 19:11:47 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:10.184 19:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.184 19:11:47 -- common/autotest_common.sh@10 -- # set +x 00:11:10.184 19:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.184 19:11:47 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.184 19:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.184 19:11:47 -- common/autotest_common.sh@10 -- # set +x 00:11:10.184 19:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.184 19:11:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:10.184 19:11:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:10.184 19:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.184 19:11:47 -- common/autotest_common.sh@10 -- # set +x 00:11:10.184 19:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.184 19:11:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.184 19:11:47 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:11:10.184 19:11:47 -- common/autotest_common.sh@10 -- # set +x 00:11:10.184 [2024-02-14 19:11:47.438175] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.184 19:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.184 19:11:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:10.184 19:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.184 19:11:47 -- common/autotest_common.sh@10 -- # set +x 00:11:10.184 19:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.184 19:11:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:10.184 19:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.184 19:11:47 -- common/autotest_common.sh@10 -- # set +x 00:11:10.184 19:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.184 19:11:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.443 19:11:47 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.443 19:11:47 -- common/autotest_common.sh@1175 -- # local i=0 00:11:10.443 19:11:47 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.443 19:11:47 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:11:10.443 19:11:47 -- common/autotest_common.sh@1182 -- # sleep 2 00:11:12.347 19:11:49 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:11:12.347 19:11:49 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:11:12.347 19:11:49 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.347 19:11:49 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:11:12.347 19:11:49 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.347 19:11:49 -- common/autotest_common.sh@1185 -- # return 0 00:11:12.347 19:11:49 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.347 19:11:49 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.347 19:11:49 -- common/autotest_common.sh@1196 -- # local i=0 00:11:12.347 19:11:49 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:11:12.347 19:11:49 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.347 19:11:49 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:12.347 19:11:49 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.347 19:11:49 -- common/autotest_common.sh@1208 -- # return 0 00:11:12.347 19:11:49 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:12.347 19:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.347 19:11:49 -- common/autotest_common.sh@10 -- # set +x 00:11:12.347 19:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.347 19:11:49 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.347 19:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.347 19:11:49 -- common/autotest_common.sh@10 -- # set +x 00:11:12.347 19:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.348 19:11:49 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
00:11:12.348 19:11:49 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:12.348 19:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.348 19:11:49 -- common/autotest_common.sh@10 -- # set +x 00:11:12.348 19:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.348 19:11:49 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.348 19:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.348 19:11:49 -- common/autotest_common.sh@10 -- # set +x 00:11:12.348 [2024-02-14 19:11:49.735162] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.348 19:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.348 19:11:49 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:12.348 19:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.348 19:11:49 -- common/autotest_common.sh@10 -- # set +x 00:11:12.348 19:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.348 19:11:49 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:12.348 19:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.348 19:11:49 -- common/autotest_common.sh@10 -- # set +x 00:11:12.348 19:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.348 19:11:49 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.606 19:11:49 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.606 19:11:49 -- common/autotest_common.sh@1175 -- # local i=0 00:11:12.606 19:11:49 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.606 19:11:49 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:11:12.606 19:11:49 -- common/autotest_common.sh@1182 -- # sleep 2 00:11:15.138 19:11:51 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:11:15.138 19:11:51 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:11:15.138 19:11:51 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.138 19:11:51 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:11:15.138 19:11:51 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.138 19:11:51 -- common/autotest_common.sh@1185 -- # return 0 00:11:15.138 19:11:51 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.138 19:11:51 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.138 19:11:51 -- common/autotest_common.sh@1196 -- # local i=0 00:11:15.138 19:11:51 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:11:15.138 19:11:51 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.138 19:11:51 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:15.138 19:11:51 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.138 19:11:52 -- common/autotest_common.sh@1208 -- # return 0 00:11:15.138 19:11:52 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.138 19:11:52 -- common/autotest_common.sh@549 -- # 
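The five nearly identical blocks above are iterations of the main rpc.sh loop: each pass recreates the subsystem from scratch, attaches Malloc1 at a fixed NSID, connects from the initiator, waits for the device, then tears everything down again. Reconstructed as a single loop body (helper names as sketched earlier, ordering per the trace):

    loops=5
    for i in $(seq 1 $loops); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # fixed NSID 5
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
             --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done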
xtrace_disable 00:11:15.138 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.138 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.138 19:11:52 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@99 -- # seq 1 5 00:11:15.139 19:11:52 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:15.139 19:11:52 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 [2024-02-14 19:11:52.047727] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:15.139 19:11:52 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 [2024-02-14 19:11:52.095726] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:15.139 19:11:52 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 [2024-02-14 19:11:52.143796] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:15.139 19:11:52 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 [2024-02-14 19:11:52.191838] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:15.139 19:11:52 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 [2024-02-14 19:11:52.239949] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:15.139 19:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.139 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 19:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.139 19:11:52 -- target/rpc.sh@110 -- # stats='{ 00:11:15.139 "poll_groups": [ 00:11:15.139 { 00:11:15.139 "admin_qpairs": 2, 00:11:15.139 "completed_nvme_io": 66, 00:11:15.139 "current_admin_qpairs": 0, 00:11:15.139 "current_io_qpairs": 0, 00:11:15.139 "io_qpairs": 16, 00:11:15.139 "name": "nvmf_tgt_poll_group_0", 00:11:15.139 "pending_bdev_io": 0, 00:11:15.139 "transports": [ 00:11:15.140 { 00:11:15.140 "trtype": "TCP" 00:11:15.140 } 00:11:15.140 ] 00:11:15.140 }, 00:11:15.140 { 00:11:15.140 "admin_qpairs": 3, 00:11:15.140 "completed_nvme_io": 67, 00:11:15.140 "current_admin_qpairs": 0, 00:11:15.140 "current_io_qpairs": 0, 00:11:15.140 "io_qpairs": 17, 00:11:15.140 "name": "nvmf_tgt_poll_group_1", 00:11:15.140 "pending_bdev_io": 0, 00:11:15.140 "transports": [ 00:11:15.140 { 00:11:15.140 "trtype": "TCP" 00:11:15.140 } 00:11:15.140 ] 00:11:15.140 }, 00:11:15.140 { 00:11:15.140 "admin_qpairs": 1, 00:11:15.140 "completed_nvme_io": 119, 00:11:15.140 "current_admin_qpairs": 0, 00:11:15.140 "current_io_qpairs": 0, 00:11:15.140 "io_qpairs": 19, 00:11:15.140 "name": "nvmf_tgt_poll_group_2", 00:11:15.140 "pending_bdev_io": 0, 00:11:15.140 "transports": [ 00:11:15.140 { 00:11:15.140 "trtype": "TCP" 00:11:15.140 } 00:11:15.140 ] 00:11:15.140 }, 00:11:15.140 { 00:11:15.140 "admin_qpairs": 1, 00:11:15.140 "completed_nvme_io": 168, 00:11:15.140 "current_admin_qpairs": 0, 00:11:15.140 "current_io_qpairs": 0, 00:11:15.140 "io_qpairs": 18, 00:11:15.140 "name": "nvmf_tgt_poll_group_3", 00:11:15.140 "pending_bdev_io": 0, 00:11:15.140 "transports": [ 00:11:15.140 { 00:11:15.140 "trtype": "TCP" 00:11:15.140 } 00:11:15.140 ] 00:11:15.140 } 00:11:15.140 ], 00:11:15.140 "tick_rate": 2200000000 00:11:15.140 }' 00:11:15.140 19:11:52 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:15.140 19:11:52 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:15.140 19:11:52 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:15.140 19:11:52 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:15.140 19:11:52 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:15.140 19:11:52 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:15.140 19:11:52 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:15.140 19:11:52 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 
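The jsum helper traced here aggregates a single counter across all poll groups: jq extracts the field from the nvmf_get_stats JSON (one value per group) and the awk step that follows in the trace sums them. A minimal standalone sketch of that pattern, assuming the JSON dump shown above has been saved to a hypothetical stats.json file:

    jsum() {
        # Sum one numeric field across all poll groups in an nvmf_get_stats dump.
        local filter=$1
        jq "$filter" stats.json | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 2+3+1+1      -> 7,  matching (( 7 > 0 )) in the trace
    jsum '.poll_groups[].io_qpairs'      # 16+17+19+18  -> 70, matching (( 70 > 0 )) in the trace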
00:11:15.140 19:11:52 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:15.140 19:11:52 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:15.140 19:11:52 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:15.140 19:11:52 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:15.140 19:11:52 -- target/rpc.sh@123 -- # nvmftestfini 00:11:15.140 19:11:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:15.140 19:11:52 -- nvmf/common.sh@116 -- # sync 00:11:15.140 19:11:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:15.140 19:11:52 -- nvmf/common.sh@119 -- # set +e 00:11:15.140 19:11:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:15.140 19:11:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:15.140 rmmod nvme_tcp 00:11:15.140 rmmod nvme_fabrics 00:11:15.140 rmmod nvme_keyring 00:11:15.140 19:11:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:15.140 19:11:52 -- nvmf/common.sh@123 -- # set -e 00:11:15.140 19:11:52 -- nvmf/common.sh@124 -- # return 0 00:11:15.140 19:11:52 -- nvmf/common.sh@477 -- # '[' -n 66049 ']' 00:11:15.140 19:11:52 -- nvmf/common.sh@478 -- # killprocess 66049 00:11:15.140 19:11:52 -- common/autotest_common.sh@924 -- # '[' -z 66049 ']' 00:11:15.140 19:11:52 -- common/autotest_common.sh@928 -- # kill -0 66049 00:11:15.140 19:11:52 -- common/autotest_common.sh@929 -- # uname 00:11:15.140 19:11:52 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:15.140 19:11:52 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 66049 00:11:15.140 killing process with pid 66049 00:11:15.140 19:11:52 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:15.140 19:11:52 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:15.140 19:11:52 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 66049' 00:11:15.140 19:11:52 -- common/autotest_common.sh@943 -- # kill 66049 00:11:15.140 19:11:52 -- common/autotest_common.sh@948 -- # wait 66049 00:11:15.708 19:11:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:15.708 19:11:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:15.708 19:11:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:15.708 19:11:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.708 19:11:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:15.708 19:11:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.708 19:11:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.708 19:11:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.708 19:11:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:15.708 00:11:15.708 real 0m18.862s 00:11:15.708 user 1m10.808s 00:11:15.708 sys 0m2.127s 00:11:15.708 19:11:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:15.708 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.708 ************************************ 00:11:15.708 END TEST nvmf_rpc 00:11:15.708 ************************************ 00:11:15.708 19:11:52 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:15.708 19:11:52 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:11:15.708 19:11:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:15.708 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.708 ************************************ 00:11:15.708 START TEST nvmf_invalid 00:11:15.708 ************************************ 00:11:15.708 
19:11:52 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:15.708 * Looking for test storage... 00:11:15.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:15.708 19:11:53 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:15.708 19:11:53 -- nvmf/common.sh@7 -- # uname -s 00:11:15.708 19:11:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.708 19:11:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.708 19:11:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.708 19:11:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.708 19:11:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.708 19:11:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.708 19:11:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.708 19:11:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.708 19:11:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.708 19:11:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.708 19:11:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:11:15.708 19:11:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:11:15.708 19:11:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.708 19:11:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.708 19:11:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:15.708 19:11:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:15.708 19:11:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.708 19:11:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.708 19:11:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.708 19:11:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.709 19:11:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.709 19:11:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.709 19:11:53 -- paths/export.sh@5 -- # export PATH 00:11:15.709 19:11:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.709 19:11:53 -- nvmf/common.sh@46 -- # : 0 00:11:15.709 19:11:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:15.709 19:11:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:15.709 19:11:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:15.709 19:11:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.709 19:11:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.709 19:11:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:15.709 19:11:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:15.709 19:11:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:15.709 19:11:53 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:15.709 19:11:53 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.709 19:11:53 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:15.709 19:11:53 -- target/invalid.sh@14 -- # target=foobar 00:11:15.709 19:11:53 -- target/invalid.sh@16 -- # RANDOM=0 00:11:15.709 19:11:53 -- target/invalid.sh@34 -- # nvmftestinit 00:11:15.709 19:11:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:15.709 19:11:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.709 19:11:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:15.709 19:11:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:15.709 19:11:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:15.709 19:11:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.709 19:11:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.709 19:11:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.709 19:11:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:15.709 19:11:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:15.709 19:11:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:15.709 19:11:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:15.709 19:11:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:15.709 19:11:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:15.709 19:11:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.709 19:11:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.709 19:11:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:11:15.709 19:11:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:15.709 19:11:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:15.709 19:11:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:15.709 19:11:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:15.709 19:11:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.709 19:11:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:15.709 19:11:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:15.709 19:11:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:15.709 19:11:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:15.709 19:11:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:15.709 19:11:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:15.709 Cannot find device "nvmf_tgt_br" 00:11:15.709 19:11:53 -- nvmf/common.sh@154 -- # true 00:11:15.709 19:11:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:15.709 Cannot find device "nvmf_tgt_br2" 00:11:15.709 19:11:53 -- nvmf/common.sh@155 -- # true 00:11:15.709 19:11:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:15.968 19:11:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:15.968 Cannot find device "nvmf_tgt_br" 00:11:15.968 19:11:53 -- nvmf/common.sh@157 -- # true 00:11:15.968 19:11:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:15.968 Cannot find device "nvmf_tgt_br2" 00:11:15.968 19:11:53 -- nvmf/common.sh@158 -- # true 00:11:15.968 19:11:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:15.968 19:11:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:15.968 19:11:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:15.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.968 19:11:53 -- nvmf/common.sh@161 -- # true 00:11:15.968 19:11:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:15.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.968 19:11:53 -- nvmf/common.sh@162 -- # true 00:11:15.968 19:11:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:15.968 19:11:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:15.968 19:11:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:15.968 19:11:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:15.968 19:11:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:15.968 19:11:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:15.968 19:11:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:15.968 19:11:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:15.968 19:11:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:15.968 19:11:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:15.968 19:11:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:15.968 19:11:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:15.968 19:11:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:11:15.968 19:11:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:15.968 19:11:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:15.968 19:11:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:15.968 19:11:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:15.968 19:11:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:15.968 19:11:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:15.968 19:11:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:15.968 19:11:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:15.968 19:11:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:15.968 19:11:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:15.968 19:11:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:16.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:11:16.226 00:11:16.226 --- 10.0.0.2 ping statistics --- 00:11:16.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.226 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:16.226 19:11:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:16.226 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:16.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:11:16.227 00:11:16.227 --- 10.0.0.3 ping statistics --- 00:11:16.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.227 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:16.227 19:11:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:16.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:16.227 00:11:16.227 --- 10.0.0.1 ping statistics --- 00:11:16.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.227 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:16.227 19:11:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.227 19:11:53 -- nvmf/common.sh@421 -- # return 0 00:11:16.227 19:11:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:16.227 19:11:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.227 19:11:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:16.227 19:11:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:16.227 19:11:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.227 19:11:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:16.227 19:11:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:16.227 19:11:53 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:16.227 19:11:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:16.227 19:11:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:16.227 19:11:53 -- common/autotest_common.sh@10 -- # set +x 00:11:16.227 19:11:53 -- nvmf/common.sh@469 -- # nvmfpid=66555 00:11:16.227 19:11:53 -- nvmf/common.sh@470 -- # waitforlisten 66555 00:11:16.227 19:11:53 -- common/autotest_common.sh@817 -- # '[' -z 66555 ']' 00:11:16.227 19:11:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
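The ip commands traced above build the virtual test network that the NVMe/TCP listeners use: a target namespace, veth pairs whose host-side peers are bridged together, 10.0.0.x addressing, an iptables rule for port 4420, and ping checks in both directions. Condensed into one illustrative sketch using only the names and commands visible in the trace (this is not the literal body of nvmf_veth_init):

    # Create the target network namespace and the veth pairs.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target-side endpoints into the namespace and assign addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic on the test port and verify connectivity both ways.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1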
00:11:16.227 19:11:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:16.227 19:11:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.227 19:11:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:16.227 19:11:53 -- common/autotest_common.sh@10 -- # set +x 00:11:16.227 19:11:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.227 [2024-02-14 19:11:53.496384] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:11:16.227 [2024-02-14 19:11:53.496516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.227 [2024-02-14 19:11:53.636240] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.485 [2024-02-14 19:11:53.778993] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:16.485 [2024-02-14 19:11:53.779177] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.485 [2024-02-14 19:11:53.779190] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.485 [2024-02-14 19:11:53.779199] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.485 [2024-02-14 19:11:53.779520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.485 [2024-02-14 19:11:53.779850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.485 [2024-02-14 19:11:53.780028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.485 [2024-02-14 19:11:53.780035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.421 19:11:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:17.421 19:11:54 -- common/autotest_common.sh@850 -- # return 0 00:11:17.421 19:11:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:17.421 19:11:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:17.421 19:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:17.421 19:11:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.421 19:11:54 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:17.421 19:11:54 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29006 00:11:17.421 [2024-02-14 19:11:54.772207] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:17.421 19:11:54 -- target/invalid.sh@40 -- # out='2024/02/14 19:11:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29006 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:17.421 request: 00:11:17.421 { 00:11:17.421 "method": "nvmf_create_subsystem", 00:11:17.421 "params": { 00:11:17.421 "nqn": "nqn.2016-06.io.spdk:cnode29006", 00:11:17.421 "tgt_name": "foobar" 00:11:17.421 } 00:11:17.421 } 00:11:17.421 Got JSON-RPC error response 00:11:17.421 GoRPCClient: error on JSON-RPC call' 00:11:17.421 19:11:54 -- 
target/invalid.sh@41 -- # [[ 2024/02/14 19:11:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29006 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:17.421 request: 00:11:17.421 { 00:11:17.421 "method": "nvmf_create_subsystem", 00:11:17.421 "params": { 00:11:17.421 "nqn": "nqn.2016-06.io.spdk:cnode29006", 00:11:17.421 "tgt_name": "foobar" 00:11:17.421 } 00:11:17.421 } 00:11:17.421 Got JSON-RPC error response 00:11:17.421 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:17.421 19:11:54 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:17.421 19:11:54 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10264 00:11:17.680 [2024-02-14 19:11:55.048594] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10264: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:17.680 19:11:55 -- target/invalid.sh@45 -- # out='2024/02/14 19:11:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10264 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:17.680 request: 00:11:17.680 { 00:11:17.680 "method": "nvmf_create_subsystem", 00:11:17.680 "params": { 00:11:17.680 "nqn": "nqn.2016-06.io.spdk:cnode10264", 00:11:17.680 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:17.680 } 00:11:17.680 } 00:11:17.680 Got JSON-RPC error response 00:11:17.680 GoRPCClient: error on JSON-RPC call' 00:11:17.680 19:11:55 -- target/invalid.sh@46 -- # [[ 2024/02/14 19:11:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10264 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:17.680 request: 00:11:17.680 { 00:11:17.680 "method": "nvmf_create_subsystem", 00:11:17.680 "params": { 00:11:17.680 "nqn": "nqn.2016-06.io.spdk:cnode10264", 00:11:17.680 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:17.680 } 00:11:17.680 } 00:11:17.680 Got JSON-RPC error response 00:11:17.680 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:17.680 19:11:55 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:17.680 19:11:55 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20662 00:11:17.939 [2024-02-14 19:11:55.276836] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20662: invalid model number 'SPDK_Controller' 00:11:17.939 19:11:55 -- target/invalid.sh@50 -- # out='2024/02/14 19:11:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode20662], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:17.939 request: 00:11:17.939 { 00:11:17.939 "method": "nvmf_create_subsystem", 00:11:17.939 "params": { 00:11:17.939 "nqn": "nqn.2016-06.io.spdk:cnode20662", 00:11:17.939 "model_number": "SPDK_Controller\u001f" 00:11:17.939 } 00:11:17.939 } 00:11:17.940 Got JSON-RPC error response 00:11:17.940 GoRPCClient: error on JSON-RPC call' 00:11:17.940 19:11:55 -- target/invalid.sh@51 -- # [[ 2024/02/14 
19:11:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode20662], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:17.940 request: 00:11:17.940 { 00:11:17.940 "method": "nvmf_create_subsystem", 00:11:17.940 "params": { 00:11:17.940 "nqn": "nqn.2016-06.io.spdk:cnode20662", 00:11:17.940 "model_number": "SPDK_Controller\u001f" 00:11:17.940 } 00:11:17.940 } 00:11:17.940 Got JSON-RPC error response 00:11:17.940 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:17.940 19:11:55 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:17.940 19:11:55 -- target/invalid.sh@19 -- # local length=21 ll 00:11:17.940 19:11:55 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:17.940 19:11:55 -- target/invalid.sh@21 -- # local chars 00:11:17.940 19:11:55 -- target/invalid.sh@22 -- # local string 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # printf %x 101 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # string+=e 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # printf %x 50 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # string+=2 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # printf %x 94 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # string+='^' 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # printf %x 55 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # string+=7 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # printf %x 108 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # string+=l 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # printf %x 57 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # string+=9 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.940 19:11:55 -- 
target/invalid.sh@25 -- # printf %x 54 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # string+=6 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # printf %x 73 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # string+=I 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # printf %x 41 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # string+=')' 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # printf %x 51 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:17.940 19:11:55 -- target/invalid.sh@25 -- # string+=3 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.940 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # printf %x 114 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+=r 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # printf %x 50 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+=2 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # printf %x 57 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+=9 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # printf %x 96 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+='`' 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # printf %x 42 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+='*' 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # printf %x 92 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+='\' 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # printf %x 122 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+=z 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- 
target/invalid.sh@25 -- # printf %x 59 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+=';' 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # printf %x 76 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+=L 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # printf %x 46 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+=. 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # printf %x 41 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:18.199 19:11:55 -- target/invalid.sh@25 -- # string+=')' 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.199 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.199 19:11:55 -- target/invalid.sh@28 -- # [[ e == \- ]] 00:11:18.199 19:11:55 -- target/invalid.sh@31 -- # echo 'e2^7l96I)3r29`*\z;L.)' 00:11:18.199 19:11:55 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'e2^7l96I)3r29`*\z;L.)' nqn.2016-06.io.spdk:cnode6400 00:11:18.459 [2024-02-14 19:11:55.637345] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6400: invalid serial number 'e2^7l96I)3r29`*\z;L.)' 00:11:18.459 19:11:55 -- target/invalid.sh@54 -- # out='2024/02/14 19:11:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6400 serial_number:e2^7l96I)3r29`*\z;L.)], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN e2^7l96I)3r29`*\z;L.) 00:11:18.459 request: 00:11:18.459 { 00:11:18.459 "method": "nvmf_create_subsystem", 00:11:18.459 "params": { 00:11:18.459 "nqn": "nqn.2016-06.io.spdk:cnode6400", 00:11:18.459 "serial_number": "e2^7l96I)3r29`*\\z;L.)" 00:11:18.459 } 00:11:18.459 } 00:11:18.459 Got JSON-RPC error response 00:11:18.459 GoRPCClient: error on JSON-RPC call' 00:11:18.459 19:11:55 -- target/invalid.sh@55 -- # [[ 2024/02/14 19:11:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6400 serial_number:e2^7l96I)3r29`*\z;L.)], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN e2^7l96I)3r29`*\z;L.) 
00:11:18.459 request: 00:11:18.459 { 00:11:18.459 "method": "nvmf_create_subsystem", 00:11:18.459 "params": { 00:11:18.459 "nqn": "nqn.2016-06.io.spdk:cnode6400", 00:11:18.459 "serial_number": "e2^7l96I)3r29`*\\z;L.)" 00:11:18.459 } 00:11:18.459 } 00:11:18.459 Got JSON-RPC error response 00:11:18.459 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:18.459 19:11:55 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:18.459 19:11:55 -- target/invalid.sh@19 -- # local length=41 ll 00:11:18.459 19:11:55 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:18.459 19:11:55 -- target/invalid.sh@21 -- # local chars 00:11:18.459 19:11:55 -- target/invalid.sh@22 -- # local string 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 109 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=m 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 90 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=Z 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 74 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=J 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 76 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=L 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 122 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=z 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 44 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=, 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 70 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=F 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 56 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=8 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 94 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+='^' 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 103 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=g 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 34 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+='"' 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 36 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+='$' 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 77 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=M 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 70 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=F 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 84 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=T 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 125 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+='}' 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 102 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=f 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 79 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=O 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 81 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=Q 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 79 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=O 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 124 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+='|' 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 86 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=V 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 39 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # string+=\' 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.459 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # printf %x 84 00:11:18.459 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=T 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 62 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+='>' 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 116 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=t 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 82 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=R 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 59 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=';' 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 46 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=. 
00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 93 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=']' 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 123 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+='{' 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 60 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+='<' 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 93 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=']' 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 106 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=j 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 32 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=' ' 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 109 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=m 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 93 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=']' 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 97 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=a 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 63 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+='?' 
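Each character of the random serial/model strings is produced the same way in the loop above (which continues below before the final echo): pick a decimal code from the chars array, convert it to hex with printf %x, turn the hex escape into a byte with echo -e, and append it. A compact sketch of that string builder; the random_string name and the direct use of $RANDOM are illustrative, not the original gen_random_s implementation:

    random_string() {
        # Build an n-character string from the ASCII codes 32..127, one byte at a time.
        local length=$1 ll code hex string=''
        local chars=($(seq 32 127))
        for (( ll = 0; ll < length; ll++ )); do
            code=${chars[RANDOM % ${#chars[@]}]}
            hex=$(printf %x "$code")          # decimal code -> hex digits
            string+=$(echo -e "\x$hex")       # hex escape -> character
        done
        printf '%s\n' "$string"
    }

    random_string 21    # e.g. a 21-character serial number like the one passed to -s above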
00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 80 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=P 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # printf %x 51 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:18.460 19:11:55 -- target/invalid.sh@25 -- # string+=3 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:18.460 19:11:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:18.460 19:11:55 -- target/invalid.sh@28 -- # [[ m == \- ]] 00:11:18.460 19:11:55 -- target/invalid.sh@31 -- # echo 'mZJLz,F8^g"$MFT}fOQO|V'\''T>tR;.]{<]j m]a?P3' 00:11:18.460 19:11:55 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'mZJLz,F8^g"$MFT}fOQO|V'\''T>tR;.]{<]j m]a?P3' nqn.2016-06.io.spdk:cnode19972 00:11:18.719 [2024-02-14 19:11:56.101904] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19972: invalid model number 'mZJLz,F8^g"$MFT}fOQO|V'T>tR;.]{<]j m]a?P3' 00:11:18.719 19:11:56 -- target/invalid.sh@58 -- # out='2024/02/14 19:11:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:mZJLz,F8^g"$MFT}fOQO|V'\''T>tR;.]{<]j m]a?P3 nqn:nqn.2016-06.io.spdk:cnode19972], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN mZJLz,F8^g"$MFT}fOQO|V'\''T>tR;.]{<]j m]a?P3 00:11:18.719 request: 00:11:18.719 { 00:11:18.719 "method": "nvmf_create_subsystem", 00:11:18.719 "params": { 00:11:18.719 "nqn": "nqn.2016-06.io.spdk:cnode19972", 00:11:18.719 "model_number": "mZJLz,F8^g\"$MFT}fOQO|V'\''T>tR;.]{<]j m]a?P3" 00:11:18.719 } 00:11:18.719 } 00:11:18.719 Got JSON-RPC error response 00:11:18.719 GoRPCClient: error on JSON-RPC call' 00:11:18.719 19:11:56 -- target/invalid.sh@59 -- # [[ 2024/02/14 19:11:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:mZJLz,F8^g"$MFT}fOQO|V'T>tR;.]{<]j m]a?P3 nqn:nqn.2016-06.io.spdk:cnode19972], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN mZJLz,F8^g"$MFT}fOQO|V'T>tR;.]{<]j m]a?P3 00:11:18.719 request: 00:11:18.719 { 00:11:18.719 "method": "nvmf_create_subsystem", 00:11:18.719 "params": { 00:11:18.719 "nqn": "nqn.2016-06.io.spdk:cnode19972", 00:11:18.719 "model_number": "mZJLz,F8^g\"$MFT}fOQO|V'T>tR;.]{<]j m]a?P3" 00:11:18.719 } 00:11:18.719 } 00:11:18.719 Got JSON-RPC error response 00:11:18.719 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:18.719 19:11:56 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:18.977 [2024-02-14 19:11:56.378315] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.236 19:11:56 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:19.494 19:11:56 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:19.494 19:11:56 -- target/invalid.sh@67 -- # echo '' 00:11:19.494 19:11:56 -- target/invalid.sh@67 -- # head -n 1 00:11:19.494 19:11:56 -- target/invalid.sh@67 -- # IP= 00:11:19.494 19:11:56 -- target/invalid.sh@69 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:19.752 [2024-02-14 19:11:56.983617] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:19.752 19:11:57 -- target/invalid.sh@69 -- # out='2024/02/14 19:11:56 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:19.752 request: 00:11:19.752 { 00:11:19.752 "method": "nvmf_subsystem_remove_listener", 00:11:19.752 "params": { 00:11:19.752 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:19.752 "listen_address": { 00:11:19.752 "trtype": "tcp", 00:11:19.752 "traddr": "", 00:11:19.752 "trsvcid": "4421" 00:11:19.752 } 00:11:19.752 } 00:11:19.752 } 00:11:19.752 Got JSON-RPC error response 00:11:19.752 GoRPCClient: error on JSON-RPC call' 00:11:19.752 19:11:57 -- target/invalid.sh@70 -- # [[ 2024/02/14 19:11:56 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:19.752 request: 00:11:19.752 { 00:11:19.752 "method": "nvmf_subsystem_remove_listener", 00:11:19.752 "params": { 00:11:19.752 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:19.752 "listen_address": { 00:11:19.752 "trtype": "tcp", 00:11:19.752 "traddr": "", 00:11:19.752 "trsvcid": "4421" 00:11:19.752 } 00:11:19.752 } 00:11:19.752 } 00:11:19.752 Got JSON-RPC error response 00:11:19.752 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:19.752 19:11:57 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9448 -i 0 00:11:20.011 [2024-02-14 19:11:57.263950] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9448: invalid cntlid range [0-65519] 00:11:20.011 19:11:57 -- target/invalid.sh@73 -- # out='2024/02/14 19:11:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9448], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:20.011 request: 00:11:20.011 { 00:11:20.011 "method": "nvmf_create_subsystem", 00:11:20.011 "params": { 00:11:20.011 "nqn": "nqn.2016-06.io.spdk:cnode9448", 00:11:20.011 "min_cntlid": 0 00:11:20.011 } 00:11:20.011 } 00:11:20.011 Got JSON-RPC error response 00:11:20.011 GoRPCClient: error on JSON-RPC call' 00:11:20.011 19:11:57 -- target/invalid.sh@74 -- # [[ 2024/02/14 19:11:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9448], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:20.011 request: 00:11:20.011 { 00:11:20.011 "method": "nvmf_create_subsystem", 00:11:20.011 "params": { 00:11:20.011 "nqn": "nqn.2016-06.io.spdk:cnode9448", 00:11:20.011 "min_cntlid": 0 00:11:20.011 } 00:11:20.011 } 00:11:20.011 Got JSON-RPC error response 00:11:20.011 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:20.011 19:11:57 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode16520 -i 65520 00:11:20.269 [2024-02-14 19:11:57.536321] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16520: invalid cntlid range [65520-65519] 00:11:20.269 19:11:57 -- target/invalid.sh@75 -- # out='2024/02/14 19:11:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16520], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:20.269 request: 00:11:20.269 { 00:11:20.269 "method": "nvmf_create_subsystem", 00:11:20.269 "params": { 00:11:20.269 "nqn": "nqn.2016-06.io.spdk:cnode16520", 00:11:20.269 "min_cntlid": 65520 00:11:20.269 } 00:11:20.269 } 00:11:20.269 Got JSON-RPC error response 00:11:20.269 GoRPCClient: error on JSON-RPC call' 00:11:20.269 19:11:57 -- target/invalid.sh@76 -- # [[ 2024/02/14 19:11:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16520], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:20.269 request: 00:11:20.269 { 00:11:20.269 "method": "nvmf_create_subsystem", 00:11:20.269 "params": { 00:11:20.269 "nqn": "nqn.2016-06.io.spdk:cnode16520", 00:11:20.269 "min_cntlid": 65520 00:11:20.269 } 00:11:20.269 } 00:11:20.269 Got JSON-RPC error response 00:11:20.269 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:20.269 19:11:57 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4075 -I 0 00:11:20.528 [2024-02-14 19:11:57.768715] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4075: invalid cntlid range [1-0] 00:11:20.528 19:11:57 -- target/invalid.sh@77 -- # out='2024/02/14 19:11:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode4075], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:20.528 request: 00:11:20.528 { 00:11:20.528 "method": "nvmf_create_subsystem", 00:11:20.528 "params": { 00:11:20.528 "nqn": "nqn.2016-06.io.spdk:cnode4075", 00:11:20.528 "max_cntlid": 0 00:11:20.528 } 00:11:20.528 } 00:11:20.528 Got JSON-RPC error response 00:11:20.528 GoRPCClient: error on JSON-RPC call' 00:11:20.528 19:11:57 -- target/invalid.sh@78 -- # [[ 2024/02/14 19:11:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode4075], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:20.528 request: 00:11:20.528 { 00:11:20.528 "method": "nvmf_create_subsystem", 00:11:20.528 "params": { 00:11:20.528 "nqn": "nqn.2016-06.io.spdk:cnode4075", 00:11:20.528 "max_cntlid": 0 00:11:20.528 } 00:11:20.528 } 00:11:20.528 Got JSON-RPC error response 00:11:20.528 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:20.528 19:11:57 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14953 -I 65520 00:11:20.786 [2024-02-14 19:11:58.005028] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14953: invalid cntlid range [1-65520] 00:11:20.786 19:11:58 -- target/invalid.sh@79 -- # out='2024/02/14 19:11:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode14953], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:20.786 request: 00:11:20.786 { 00:11:20.786 "method": "nvmf_create_subsystem", 00:11:20.786 "params": { 00:11:20.786 "nqn": "nqn.2016-06.io.spdk:cnode14953", 00:11:20.786 "max_cntlid": 65520 00:11:20.786 } 00:11:20.786 } 00:11:20.786 Got JSON-RPC error response 00:11:20.786 GoRPCClient: error on JSON-RPC call' 00:11:20.786 19:11:58 -- target/invalid.sh@80 -- # [[ 2024/02/14 19:11:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode14953], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:20.786 request: 00:11:20.786 { 00:11:20.786 "method": "nvmf_create_subsystem", 00:11:20.786 "params": { 00:11:20.786 "nqn": "nqn.2016-06.io.spdk:cnode14953", 00:11:20.786 "max_cntlid": 65520 00:11:20.786 } 00:11:20.786 } 00:11:20.786 Got JSON-RPC error response 00:11:20.786 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:20.786 19:11:58 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20359 -i 6 -I 5 00:11:21.045 [2024-02-14 19:11:58.277436] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20359: invalid cntlid range [6-5] 00:11:21.045 19:11:58 -- target/invalid.sh@83 -- # out='2024/02/14 19:11:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode20359], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:21.045 request: 00:11:21.045 { 00:11:21.045 "method": "nvmf_create_subsystem", 00:11:21.045 "params": { 00:11:21.045 "nqn": "nqn.2016-06.io.spdk:cnode20359", 00:11:21.045 "min_cntlid": 6, 00:11:21.045 "max_cntlid": 5 00:11:21.045 } 00:11:21.045 } 00:11:21.045 Got JSON-RPC error response 00:11:21.045 GoRPCClient: error on JSON-RPC call' 00:11:21.045 19:11:58 -- target/invalid.sh@84 -- # [[ 2024/02/14 19:11:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode20359], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:21.045 request: 00:11:21.045 { 00:11:21.045 "method": "nvmf_create_subsystem", 00:11:21.045 "params": { 00:11:21.045 "nqn": "nqn.2016-06.io.spdk:cnode20359", 00:11:21.045 "min_cntlid": 6, 00:11:21.045 "max_cntlid": 5 00:11:21.045 } 00:11:21.045 } 00:11:21.045 Got JSON-RPC error response 00:11:21.045 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:21.045 19:11:58 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:21.045 19:11:58 -- target/invalid.sh@87 -- # out='request: 00:11:21.045 { 00:11:21.045 "name": "foobar", 00:11:21.045 "method": "nvmf_delete_target", 00:11:21.045 "req_id": 1 00:11:21.045 } 00:11:21.045 Got JSON-RPC error response 00:11:21.045 response: 00:11:21.045 { 00:11:21.045 "code": -32602, 00:11:21.045 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:11:21.045 }' 00:11:21.045 19:11:58 -- target/invalid.sh@88 -- # [[ request: 00:11:21.045 { 00:11:21.045 "name": "foobar", 00:11:21.045 "method": "nvmf_delete_target", 00:11:21.045 "req_id": 1 00:11:21.045 } 00:11:21.045 Got JSON-RPC error response 00:11:21.045 response: 00:11:21.045 { 00:11:21.045 "code": -32602, 00:11:21.045 "message": "The specified target doesn't exist, cannot delete it." 00:11:21.045 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:21.045 19:11:58 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:21.045 19:11:58 -- target/invalid.sh@91 -- # nvmftestfini 00:11:21.045 19:11:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:21.045 19:11:58 -- nvmf/common.sh@116 -- # sync 00:11:21.045 19:11:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:21.045 19:11:58 -- nvmf/common.sh@119 -- # set +e 00:11:21.045 19:11:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:21.045 19:11:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:21.045 rmmod nvme_tcp 00:11:21.045 rmmod nvme_fabrics 00:11:21.314 rmmod nvme_keyring 00:11:21.314 19:11:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:21.314 19:11:58 -- nvmf/common.sh@123 -- # set -e 00:11:21.314 19:11:58 -- nvmf/common.sh@124 -- # return 0 00:11:21.314 19:11:58 -- nvmf/common.sh@477 -- # '[' -n 66555 ']' 00:11:21.314 19:11:58 -- nvmf/common.sh@478 -- # killprocess 66555 00:11:21.314 19:11:58 -- common/autotest_common.sh@924 -- # '[' -z 66555 ']' 00:11:21.314 19:11:58 -- common/autotest_common.sh@928 -- # kill -0 66555 00:11:21.314 19:11:58 -- common/autotest_common.sh@929 -- # uname 00:11:21.314 19:11:58 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:21.314 19:11:58 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 66555 00:11:21.314 19:11:58 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:21.314 19:11:58 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:21.314 19:11:58 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 66555' 00:11:21.314 killing process with pid 66555 00:11:21.314 19:11:58 -- common/autotest_common.sh@943 -- # kill 66555 00:11:21.314 19:11:58 -- common/autotest_common.sh@948 -- # wait 66555 00:11:21.572 19:11:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:21.572 19:11:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:21.572 19:11:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:21.572 19:11:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.572 19:11:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:21.572 19:11:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.572 19:11:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.572 19:11:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.572 19:11:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:21.572 00:11:21.572 real 0m5.940s 00:11:21.572 user 0m23.334s 00:11:21.572 sys 0m1.330s 00:11:21.572 19:11:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:21.572 19:11:58 -- common/autotest_common.sh@10 -- # set +x 00:11:21.573 ************************************ 00:11:21.573 END TEST nvmf_invalid 00:11:21.573 ************************************ 00:11:21.573 19:11:58 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:21.573 19:11:58 -- 
common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:11:21.573 19:11:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:21.573 19:11:58 -- common/autotest_common.sh@10 -- # set +x 00:11:21.573 ************************************ 00:11:21.573 START TEST nvmf_abort 00:11:21.573 ************************************ 00:11:21.573 19:11:58 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:21.832 * Looking for test storage... 00:11:21.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:21.832 19:11:59 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:21.832 19:11:59 -- nvmf/common.sh@7 -- # uname -s 00:11:21.832 19:11:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.832 19:11:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.832 19:11:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.832 19:11:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.832 19:11:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.832 19:11:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.832 19:11:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.832 19:11:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.832 19:11:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.832 19:11:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.832 19:11:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:11:21.832 19:11:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:11:21.832 19:11:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.832 19:11:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.832 19:11:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:21.832 19:11:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:21.832 19:11:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.832 19:11:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.832 19:11:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.832 19:11:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.832 19:11:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.832 19:11:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.832 19:11:59 -- paths/export.sh@5 -- # export PATH 00:11:21.832 19:11:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.832 19:11:59 -- nvmf/common.sh@46 -- # : 0 00:11:21.832 19:11:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:21.832 19:11:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:21.832 19:11:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:21.832 19:11:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.832 19:11:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.832 19:11:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:21.832 19:11:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:21.832 19:11:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:21.832 19:11:59 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.832 19:11:59 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:21.832 19:11:59 -- target/abort.sh@14 -- # nvmftestinit 00:11:21.832 19:11:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:21.832 19:11:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.832 19:11:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:21.832 19:11:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:21.832 19:11:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:21.832 19:11:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.832 19:11:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.832 19:11:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.832 19:11:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:21.832 19:11:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:21.832 19:11:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:21.832 19:11:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:21.832 19:11:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:21.832 19:11:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:21.832 19:11:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.832 19:11:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.832 19:11:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:21.832 19:11:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:21.832 19:11:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:21.832 19:11:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:21.832 19:11:59 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:21.832 19:11:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.832 19:11:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:21.832 19:11:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:21.832 19:11:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:21.832 19:11:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:21.832 19:11:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:21.832 19:11:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:21.832 Cannot find device "nvmf_tgt_br" 00:11:21.832 19:11:59 -- nvmf/common.sh@154 -- # true 00:11:21.832 19:11:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.832 Cannot find device "nvmf_tgt_br2" 00:11:21.832 19:11:59 -- nvmf/common.sh@155 -- # true 00:11:21.832 19:11:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:21.833 19:11:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:21.833 Cannot find device "nvmf_tgt_br" 00:11:21.833 19:11:59 -- nvmf/common.sh@157 -- # true 00:11:21.833 19:11:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:21.833 Cannot find device "nvmf_tgt_br2" 00:11:21.833 19:11:59 -- nvmf/common.sh@158 -- # true 00:11:21.833 19:11:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:21.833 19:11:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:21.833 19:11:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.833 19:11:59 -- nvmf/common.sh@161 -- # true 00:11:21.833 19:11:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.833 19:11:59 -- nvmf/common.sh@162 -- # true 00:11:21.833 19:11:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:21.833 19:11:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:21.833 19:11:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:21.833 19:11:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:21.833 19:11:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:21.833 19:11:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:22.092 19:11:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:22.092 19:11:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:22.092 19:11:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:22.092 19:11:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:22.092 19:11:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:22.092 19:11:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:22.092 19:11:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:22.092 19:11:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:22.092 19:11:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:22.092 19:11:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:22.092 19:11:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:22.092 19:11:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:22.092 19:11:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:22.092 19:11:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:22.092 19:11:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:22.092 19:11:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:22.092 19:11:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:22.092 19:11:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:22.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:22.092 00:11:22.092 --- 10.0.0.2 ping statistics --- 00:11:22.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.092 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:22.092 19:11:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:22.092 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:22.092 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:11:22.092 00:11:22.092 --- 10.0.0.3 ping statistics --- 00:11:22.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.092 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:22.092 19:11:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:22.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.598 ms 00:11:22.092 00:11:22.092 --- 10.0.0.1 ping statistics --- 00:11:22.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.092 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:11:22.092 19:11:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.092 19:11:59 -- nvmf/common.sh@421 -- # return 0 00:11:22.092 19:11:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:22.092 19:11:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.092 19:11:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:22.092 19:11:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:22.092 19:11:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.092 19:11:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:22.092 19:11:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:22.092 19:11:59 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:22.092 19:11:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:22.092 19:11:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:22.092 19:11:59 -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:22.092 19:11:59 -- nvmf/common.sh@469 -- # nvmfpid=67067 00:11:22.092 19:11:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:22.092 19:11:59 -- nvmf/common.sh@470 -- # waitforlisten 67067 00:11:22.092 19:11:59 -- common/autotest_common.sh@817 -- # '[' -z 67067 ']' 00:11:22.092 19:11:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.092 19:11:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:22.092 19:11:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.092 19:11:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:22.092 19:11:59 -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 [2024-02-14 19:11:59.494922] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:11:22.092 [2024-02-14 19:11:59.495324] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.351 [2024-02-14 19:11:59.640445] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:22.609 [2024-02-14 19:11:59.792192] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:22.609 [2024-02-14 19:11:59.792696] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.609 [2024-02-14 19:11:59.792853] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.609 [2024-02-14 19:11:59.792979] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:22.609 [2024-02-14 19:11:59.793374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.609 [2024-02-14 19:11:59.793542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.609 [2024-02-14 19:11:59.793547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.177 19:12:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:23.177 19:12:00 -- common/autotest_common.sh@850 -- # return 0 00:11:23.177 19:12:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:23.177 19:12:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:23.177 19:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.177 19:12:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.177 19:12:00 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:23.177 19:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.177 19:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.177 [2024-02-14 19:12:00.529479] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.177 19:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.177 19:12:00 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:23.177 19:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.177 19:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.177 Malloc0 00:11:23.177 19:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.177 19:12:00 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:23.177 19:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.177 19:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.177 Delay0 00:11:23.436 19:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.436 19:12:00 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:23.436 19:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.436 19:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 19:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.436 19:12:00 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:23.436 19:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.436 19:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 19:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.436 19:12:00 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:23.436 19:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.436 19:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 [2024-02-14 19:12:00.615333] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.436 19:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.436 19:12:00 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:23.436 19:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.436 19:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 19:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.436 19:12:00 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:23.436 [2024-02-14 19:12:00.801315] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:25.967 Initializing NVMe Controllers 00:11:25.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:25.967 controller IO queue size 128 less than required 00:11:25.967 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:25.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:25.967 Initialization complete. Launching workers. 00:11:25.967 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32894 00:11:25.967 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32955, failed to submit 62 00:11:25.967 success 32894, unsuccess 61, failed 0 00:11:25.967 19:12:02 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:25.967 19:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.967 19:12:02 -- common/autotest_common.sh@10 -- # set +x 00:11:25.967 19:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.967 19:12:02 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:25.967 19:12:02 -- target/abort.sh@38 -- # nvmftestfini 00:11:25.967 19:12:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:25.967 19:12:02 -- nvmf/common.sh@116 -- # sync 00:11:25.967 19:12:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:25.967 19:12:02 -- nvmf/common.sh@119 -- # set +e 00:11:25.967 19:12:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:25.967 19:12:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:25.967 rmmod nvme_tcp 00:11:25.967 rmmod nvme_fabrics 00:11:25.967 rmmod nvme_keyring 00:11:25.967 19:12:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:25.967 19:12:02 -- nvmf/common.sh@123 -- # set -e 00:11:25.967 19:12:02 -- nvmf/common.sh@124 -- # return 0 00:11:25.967 19:12:02 -- nvmf/common.sh@477 -- # '[' -n 67067 ']' 00:11:25.967 19:12:02 -- nvmf/common.sh@478 -- # killprocess 67067 00:11:25.967 19:12:02 -- common/autotest_common.sh@924 -- # '[' -z 67067 ']' 00:11:25.967 19:12:02 -- common/autotest_common.sh@928 -- # kill -0 67067 00:11:25.967 19:12:02 -- common/autotest_common.sh@929 -- # uname 00:11:25.967 19:12:02 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:25.967 19:12:02 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 67067 00:11:25.967 19:12:02 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:11:25.967 19:12:02 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:11:25.967 killing process with pid 67067 00:11:25.967 19:12:02 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 67067' 00:11:25.967 19:12:02 -- common/autotest_common.sh@943 -- # kill 67067 00:11:25.967 19:12:02 -- common/autotest_common.sh@948 -- # wait 67067 00:11:25.967 19:12:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:25.967 19:12:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:25.967 19:12:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:25.967 19:12:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.967 19:12:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:25.967 19:12:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.967 
19:12:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.967 19:12:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.967 19:12:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:25.967 00:11:25.967 real 0m4.416s 00:11:25.967 user 0m12.345s 00:11:25.967 sys 0m1.093s 00:11:25.967 19:12:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:25.967 19:12:03 -- common/autotest_common.sh@10 -- # set +x 00:11:25.967 ************************************ 00:11:25.967 END TEST nvmf_abort 00:11:25.967 ************************************ 00:11:26.226 19:12:03 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:26.226 19:12:03 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:11:26.226 19:12:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:26.226 19:12:03 -- common/autotest_common.sh@10 -- # set +x 00:11:26.226 ************************************ 00:11:26.226 START TEST nvmf_ns_hotplug_stress 00:11:26.226 ************************************ 00:11:26.226 19:12:03 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:26.226 * Looking for test storage... 00:11:26.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:26.226 19:12:03 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:26.226 19:12:03 -- nvmf/common.sh@7 -- # uname -s 00:11:26.226 19:12:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.226 19:12:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.226 19:12:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.226 19:12:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.226 19:12:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.226 19:12:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.226 19:12:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.226 19:12:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.226 19:12:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.226 19:12:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.226 19:12:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:11:26.226 19:12:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:11:26.226 19:12:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.226 19:12:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.226 19:12:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:26.226 19:12:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:26.226 19:12:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.226 19:12:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.226 19:12:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.226 19:12:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.226 19:12:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.226 19:12:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.226 19:12:03 -- paths/export.sh@5 -- # export PATH 00:11:26.226 19:12:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.226 19:12:03 -- nvmf/common.sh@46 -- # : 0 00:11:26.226 19:12:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:26.226 19:12:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:26.226 19:12:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:26.226 19:12:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.226 19:12:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.226 19:12:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:26.226 19:12:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:26.226 19:12:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:26.226 19:12:03 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:26.226 19:12:03 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:11:26.226 19:12:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:26.226 19:12:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.226 19:12:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:26.226 19:12:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:26.226 19:12:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:26.226 19:12:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:11:26.226 19:12:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.226 19:12:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.226 19:12:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:26.226 19:12:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:26.226 19:12:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:26.226 19:12:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:26.226 19:12:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:26.226 19:12:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:26.226 19:12:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.226 19:12:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.226 19:12:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:26.226 19:12:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:26.226 19:12:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:26.226 19:12:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:26.226 19:12:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:26.226 19:12:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.226 19:12:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:26.226 19:12:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:26.226 19:12:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:26.226 19:12:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:26.226 19:12:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:26.226 19:12:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:26.226 Cannot find device "nvmf_tgt_br" 00:11:26.226 19:12:03 -- nvmf/common.sh@154 -- # true 00:11:26.226 19:12:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:26.226 Cannot find device "nvmf_tgt_br2" 00:11:26.226 19:12:03 -- nvmf/common.sh@155 -- # true 00:11:26.226 19:12:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:26.226 19:12:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:26.226 Cannot find device "nvmf_tgt_br" 00:11:26.226 19:12:03 -- nvmf/common.sh@157 -- # true 00:11:26.226 19:12:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:26.226 Cannot find device "nvmf_tgt_br2" 00:11:26.226 19:12:03 -- nvmf/common.sh@158 -- # true 00:11:26.226 19:12:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:26.226 19:12:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:26.485 19:12:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:26.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.485 19:12:03 -- nvmf/common.sh@161 -- # true 00:11:26.485 19:12:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:26.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.485 19:12:03 -- nvmf/common.sh@162 -- # true 00:11:26.485 19:12:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:26.485 19:12:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:26.485 19:12:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:26.485 19:12:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:26.485 19:12:03 -- 
nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:26.485 19:12:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:26.485 19:12:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:26.485 19:12:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:26.485 19:12:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:26.485 19:12:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:26.485 19:12:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:26.485 19:12:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:26.485 19:12:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:26.485 19:12:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:26.485 19:12:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:26.485 19:12:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:26.485 19:12:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:26.485 19:12:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:26.485 19:12:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:26.485 19:12:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:26.485 19:12:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:26.485 19:12:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:26.485 19:12:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:26.485 19:12:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:26.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:11:26.485 00:11:26.485 --- 10.0.0.2 ping statistics --- 00:11:26.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.485 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:11:26.485 19:12:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:26.485 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:26.485 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:11:26.485 00:11:26.485 --- 10.0.0.3 ping statistics --- 00:11:26.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.485 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:11:26.485 19:12:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:26.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:26.485 00:11:26.485 --- 10.0.0.1 ping statistics --- 00:11:26.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.485 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:26.485 19:12:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.485 19:12:03 -- nvmf/common.sh@421 -- # return 0 00:11:26.485 19:12:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:26.485 19:12:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.485 19:12:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:26.485 19:12:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:26.485 19:12:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.485 19:12:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:26.485 19:12:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:26.485 19:12:03 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:11:26.485 19:12:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:26.485 19:12:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:26.485 19:12:03 -- common/autotest_common.sh@10 -- # set +x 00:11:26.485 19:12:03 -- nvmf/common.sh@469 -- # nvmfpid=67331 00:11:26.485 19:12:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:26.485 19:12:03 -- nvmf/common.sh@470 -- # waitforlisten 67331 00:11:26.485 19:12:03 -- common/autotest_common.sh@817 -- # '[' -z 67331 ']' 00:11:26.485 19:12:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.485 19:12:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:26.485 19:12:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.485 19:12:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:26.485 19:12:03 -- common/autotest_common.sh@10 -- # set +x 00:11:26.744 [2024-02-14 19:12:03.925740] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:11:26.744 [2024-02-14 19:12:03.925834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.744 [2024-02-14 19:12:04.061684] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.002 [2024-02-14 19:12:04.197914] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:27.002 [2024-02-14 19:12:04.198368] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.002 [2024-02-14 19:12:04.198495] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.002 [2024-02-14 19:12:04.198593] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:27.002 [2024-02-14 19:12:04.199164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.002 [2024-02-14 19:12:04.199318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.002 [2024-02-14 19:12:04.199325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.568 19:12:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:27.568 19:12:04 -- common/autotest_common.sh@850 -- # return 0 00:11:27.568 19:12:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:27.568 19:12:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:27.568 19:12:04 -- common/autotest_common.sh@10 -- # set +x 00:11:27.568 19:12:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.568 19:12:04 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:11:27.568 19:12:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:27.825 [2024-02-14 19:12:05.193516] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.825 19:12:05 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.084 19:12:05 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.342 [2024-02-14 19:12:05.686131] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.342 19:12:05 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.600 19:12:05 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:29.166 Malloc0 00:11:29.166 19:12:06 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:29.166 Delay0 00:11:29.166 19:12:06 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.424 19:12:06 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:29.682 NULL1 00:11:29.682 19:12:07 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:29.940 19:12:07 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=67469 00:11:29.940 19:12:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:29.940 19:12:07 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:29.940 19:12:07 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.317 Read completed with error (sct=0, sc=11) 00:11:31.317 19:12:08 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.317 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:11:31.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.576 19:12:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:11:31.576 19:12:08 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:31.834 true 00:11:31.834 19:12:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:31.834 19:12:09 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.770 19:12:09 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.770 19:12:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:11:32.770 19:12:10 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:33.028 true 00:11:33.028 19:12:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:33.028 19:12:10 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.286 19:12:10 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.547 19:12:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:11:33.547 19:12:10 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:33.809 true 00:11:33.809 19:12:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:33.809 19:12:11 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.746 19:12:11 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.746 19:12:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:11:34.746 19:12:12 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:35.005 true 00:11:35.005 19:12:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:35.005 19:12:12 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.264 19:12:12 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.522 19:12:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:11:35.522 19:12:12 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:35.781 true 00:11:35.781 19:12:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:35.781 19:12:13 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.716 19:12:13 -- target/ns_hotplug_stress.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.716 19:12:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:11:36.716 19:12:14 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:36.974 true 00:11:36.974 19:12:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:36.974 19:12:14 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.232 19:12:14 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.491 19:12:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:11:37.491 19:12:14 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:37.749 true 00:11:37.749 19:12:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:37.749 19:12:15 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.684 19:12:15 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.943 19:12:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:11:38.943 19:12:16 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:38.943 true 00:11:38.943 19:12:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:38.943 19:12:16 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.201 19:12:16 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.459 19:12:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:11:39.459 19:12:16 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:39.718 true 00:11:39.718 19:12:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:39.718 19:12:17 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.653 19:12:17 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.911 19:12:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:11:40.911 19:12:18 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:40.911 true 00:11:41.170 19:12:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:41.170 19:12:18 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.170 19:12:18 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.429 19:12:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:11:41.429 19:12:18 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:41.687 true 00:11:41.687 19:12:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:41.687 19:12:19 -- 
target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:42.623 19:12:19 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.882 19:12:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:11:42.882 19:12:20 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:43.140 true 00:11:43.140 19:12:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:43.140 19:12:20 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.399 19:12:20 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.658 19:12:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:11:43.658 19:12:20 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:43.658 true 00:11:43.916 19:12:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:43.916 19:12:21 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.851 19:12:21 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.851 19:12:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:11:44.851 19:12:22 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:45.110 true 00:11:45.110 19:12:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:45.110 19:12:22 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.369 19:12:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.627 19:12:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:11:45.627 19:12:22 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:45.886 true 00:11:45.886 19:12:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:45.886 19:12:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.821 19:12:23 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.821 19:12:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:11:46.821 19:12:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:47.079 true 00:11:47.079 19:12:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:47.079 19:12:24 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.337 19:12:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.595 19:12:24 -- target/ns_hotplug_stress.sh@40 -- # 
null_size=1017 00:11:47.595 19:12:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:47.854 true 00:11:47.854 19:12:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:47.854 19:12:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.788 19:12:25 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.788 19:12:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:11:48.788 19:12:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:49.045 true 00:11:49.046 19:12:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:49.046 19:12:26 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.304 19:12:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.561 19:12:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:11:49.562 19:12:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:49.820 true 00:11:49.820 19:12:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:49.820 19:12:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.782 19:12:27 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.051 19:12:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:11:51.051 19:12:28 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:51.051 true 00:11:51.051 19:12:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:51.051 19:12:28 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.310 19:12:28 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.568 19:12:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:11:51.568 19:12:28 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:51.827 true 00:11:51.827 19:12:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:51.827 19:12:29 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.762 19:12:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.020 19:12:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:11:53.020 19:12:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:53.020 true 00:11:53.020 19:12:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:53.020 19:12:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.587 19:12:30 -- 
target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.587 19:12:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:11:53.587 19:12:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:53.845 true 00:11:53.845 19:12:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:53.845 19:12:31 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.780 19:12:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.039 19:12:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:11:55.039 19:12:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:55.298 true 00:11:55.298 19:12:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:55.298 19:12:32 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.298 19:12:32 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.556 19:12:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:11:55.556 19:12:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:55.815 true 00:11:55.815 19:12:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:55.815 19:12:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.750 19:12:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.009 19:12:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:11:57.009 19:12:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:57.268 true 00:11:57.268 19:12:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:57.268 19:12:34 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.527 19:12:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.527 19:12:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:11:57.527 19:12:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:57.786 true 00:11:57.786 19:12:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:57.786 19:12:35 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.721 19:12:36 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.979 19:12:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:11:58.979 19:12:36 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:59.238 
true 00:11:59.238 19:12:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:11:59.238 19:12:36 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.496 19:12:36 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.756 19:12:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:11:59.756 19:12:36 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:00.014 true 00:12:00.014 19:12:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:12:00.014 19:12:37 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.951 Initializing NVMe Controllers 00:12:00.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:00.951 Controller IO queue size 128, less than required. 00:12:00.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:00.951 Controller IO queue size 128, less than required. 00:12:00.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:00.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:00.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:00.951 Initialization complete. Launching workers. 00:12:00.951 ======================================================== 00:12:00.951 Latency(us) 00:12:00.951 Device Information : IOPS MiB/s Average min max 00:12:00.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 401.97 0.20 183106.63 3264.88 1174196.03 00:12:00.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11998.72 5.86 10667.53 2466.84 621717.33 00:12:00.951 ======================================================== 00:12:00.951 Total : 12400.69 6.06 16257.12 2466.84 1174196.03 00:12:00.951 00:12:00.951 19:12:38 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.951 19:12:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:12:00.951 19:12:38 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:12:01.210 true 00:12:01.210 19:12:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67469 00:12:01.210 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (67469) - No such process 00:12:01.210 19:12:38 -- target/ns_hotplug_stress.sh@44 -- # wait 67469 00:12:01.210 19:12:38 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:01.210 19:12:38 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:12:01.210 19:12:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:01.210 19:12:38 -- nvmf/common.sh@116 -- # sync 00:12:01.210 19:12:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:01.210 19:12:38 -- nvmf/common.sh@119 -- # set +e 00:12:01.210 19:12:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:01.210 19:12:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:01.210 rmmod nvme_tcp 00:12:01.210 rmmod nvme_fabrics 00:12:01.210 rmmod nvme_keyring 00:12:01.210 19:12:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 
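For orientation, the blocks above are iterations of the ns_hotplug_stress loop: while an I/O generator runs against the subsystem, the script re-adds the Delay0 namespace, grows the NULL1 null bdev by one size unit, and removes namespace 1 again, using kill -0 on the generator PID as its liveness check. A minimal sketch of that cycle, not the script itself — rpc.py path, NQN and bdev names copied from the log, the perf_pid variable is illustrative (67469 in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  null_size=1000
  while kill -0 "$perf_pid" 2>/dev/null; do          # stop once the I/O generator exits
      "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0     # hot-add the delay bdev as a new namespace
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"     # resize the other namespace while I/O is in flight
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1       # hot-remove namespace 1 again
  done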
00:12:01.210 19:12:38 -- nvmf/common.sh@123 -- # set -e 00:12:01.210 19:12:38 -- nvmf/common.sh@124 -- # return 0 00:12:01.210 19:12:38 -- nvmf/common.sh@477 -- # '[' -n 67331 ']' 00:12:01.210 19:12:38 -- nvmf/common.sh@478 -- # killprocess 67331 00:12:01.210 19:12:38 -- common/autotest_common.sh@924 -- # '[' -z 67331 ']' 00:12:01.210 19:12:38 -- common/autotest_common.sh@928 -- # kill -0 67331 00:12:01.210 19:12:38 -- common/autotest_common.sh@929 -- # uname 00:12:01.210 19:12:38 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:01.210 19:12:38 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 67331 00:12:01.210 killing process with pid 67331 00:12:01.210 19:12:38 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:12:01.210 19:12:38 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:12:01.210 19:12:38 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 67331' 00:12:01.210 19:12:38 -- common/autotest_common.sh@943 -- # kill 67331 00:12:01.210 19:12:38 -- common/autotest_common.sh@948 -- # wait 67331 00:12:01.778 19:12:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:01.778 19:12:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:01.778 19:12:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:01.778 19:12:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.778 19:12:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:01.778 19:12:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.778 19:12:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.778 19:12:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.778 19:12:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:01.778 00:12:01.778 real 0m35.590s 00:12:01.778 user 2m29.934s 00:12:01.778 sys 0m8.183s 00:12:01.778 19:12:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:01.778 19:12:39 -- common/autotest_common.sh@10 -- # set +x 00:12:01.778 ************************************ 00:12:01.778 END TEST nvmf_ns_hotplug_stress 00:12:01.778 ************************************ 00:12:01.778 19:12:39 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:01.778 19:12:39 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:01.778 19:12:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:01.778 19:12:39 -- common/autotest_common.sh@10 -- # set +x 00:12:01.778 ************************************ 00:12:01.778 START TEST nvmf_connect_stress 00:12:01.778 ************************************ 00:12:01.778 19:12:39 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:01.778 * Looking for test storage... 
00:12:01.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:01.778 19:12:39 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:01.778 19:12:39 -- nvmf/common.sh@7 -- # uname -s 00:12:01.778 19:12:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.778 19:12:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.779 19:12:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.779 19:12:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.779 19:12:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.779 19:12:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.779 19:12:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.779 19:12:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.779 19:12:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.779 19:12:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.779 19:12:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:12:01.779 19:12:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:12:01.779 19:12:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.779 19:12:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.779 19:12:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:01.779 19:12:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:01.779 19:12:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.779 19:12:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.779 19:12:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.779 19:12:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.779 19:12:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.779 19:12:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.779 19:12:39 -- 
paths/export.sh@5 -- # export PATH 00:12:01.779 19:12:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.779 19:12:39 -- nvmf/common.sh@46 -- # : 0 00:12:01.779 19:12:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:01.779 19:12:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:01.779 19:12:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:01.779 19:12:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.779 19:12:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.779 19:12:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:01.779 19:12:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:01.779 19:12:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:01.779 19:12:39 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:01.779 19:12:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:01.779 19:12:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.779 19:12:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:01.779 19:12:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:01.779 19:12:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:01.779 19:12:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.779 19:12:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.779 19:12:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.779 19:12:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:01.779 19:12:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:01.779 19:12:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:01.779 19:12:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:01.779 19:12:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:01.779 19:12:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:01.779 19:12:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.779 19:12:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.779 19:12:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:01.779 19:12:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:01.779 19:12:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:01.779 19:12:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:01.779 19:12:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:01.779 19:12:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.779 19:12:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:01.779 19:12:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:01.779 19:12:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:01.779 19:12:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:01.779 19:12:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:02.038 19:12:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:02.038 Cannot find device "nvmf_tgt_br" 00:12:02.038 
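The "Cannot find device" / "Cannot open network namespace" messages here are expected: nvmf_veth_init first tears down interfaces that do not exist yet on a clean host, then rebuilds the virtual topology the TCP tests run over (the ip netns / ip link sequence that follows). A condensed sketch of that rebuild, using the same interface names and addresses as the log; the individual "ip link set ... up" calls and error handling are omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator, two for the target; the *_br ends get enslaved to a bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in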
19:12:39 -- nvmf/common.sh@154 -- # true 00:12:02.038 19:12:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.038 Cannot find device "nvmf_tgt_br2" 00:12:02.038 19:12:39 -- nvmf/common.sh@155 -- # true 00:12:02.038 19:12:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:02.038 19:12:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:02.038 Cannot find device "nvmf_tgt_br" 00:12:02.038 19:12:39 -- nvmf/common.sh@157 -- # true 00:12:02.038 19:12:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:02.038 Cannot find device "nvmf_tgt_br2" 00:12:02.038 19:12:39 -- nvmf/common.sh@158 -- # true 00:12:02.038 19:12:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:02.038 19:12:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:02.038 19:12:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:02.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.038 19:12:39 -- nvmf/common.sh@161 -- # true 00:12:02.038 19:12:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.038 19:12:39 -- nvmf/common.sh@162 -- # true 00:12:02.038 19:12:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:02.038 19:12:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:02.038 19:12:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:02.038 19:12:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:02.038 19:12:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:02.038 19:12:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:02.038 19:12:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:02.038 19:12:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:02.038 19:12:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:02.038 19:12:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:02.038 19:12:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:02.038 19:12:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:02.038 19:12:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:02.038 19:12:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:02.038 19:12:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:02.038 19:12:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:02.038 19:12:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:02.038 19:12:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:02.038 19:12:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:02.297 19:12:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:02.297 19:12:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:02.297 19:12:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:02.297 19:12:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:02.297 19:12:39 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:12:02.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:12:02.297 00:12:02.297 --- 10.0.0.2 ping statistics --- 00:12:02.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.297 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:12:02.297 19:12:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:02.297 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:02.297 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:12:02.297 00:12:02.297 --- 10.0.0.3 ping statistics --- 00:12:02.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.297 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:02.297 19:12:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:02.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:02.297 00:12:02.297 --- 10.0.0.1 ping statistics --- 00:12:02.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.297 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:02.297 19:12:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.297 19:12:39 -- nvmf/common.sh@421 -- # return 0 00:12:02.297 19:12:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:02.297 19:12:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.297 19:12:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:02.297 19:12:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:02.297 19:12:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.297 19:12:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:02.297 19:12:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:02.297 19:12:39 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:02.297 19:12:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:02.297 19:12:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:02.297 19:12:39 -- common/autotest_common.sh@10 -- # set +x 00:12:02.297 19:12:39 -- nvmf/common.sh@469 -- # nvmfpid=68617 00:12:02.297 19:12:39 -- nvmf/common.sh@470 -- # waitforlisten 68617 00:12:02.297 19:12:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:02.297 19:12:39 -- common/autotest_common.sh@817 -- # '[' -z 68617 ']' 00:12:02.297 19:12:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.297 19:12:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:02.297 19:12:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.297 19:12:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:02.297 19:12:39 -- common/autotest_common.sh@10 -- # set +x 00:12:02.297 [2024-02-14 19:12:39.604005] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
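After the three ping checks confirm the veth topology, the connect_stress bring-up that follows starts nvmf_tgt inside the target namespace, creates a TCP transport, a cnode1 subsystem (allow-any-host, serial SPDK00000000000001, max 10 namespaces), a listener on 10.0.0.2:4420 and the NULL1 null bdev, and then launches the connect_stress binary against that listener. Reduced to the commands visible in the log — a sketch, not the harness; the backgrounding and PID handling here are illustrative, the real scripts wait for the RPC socket via waitforlisten before issuing RPCs:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!                                   # illustrative; the harness records this as nvmfpid
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512
  /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10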
00:12:02.297 [2024-02-14 19:12:39.604103] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.556 [2024-02-14 19:12:39.743326] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:02.556 [2024-02-14 19:12:39.891131] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:02.556 [2024-02-14 19:12:39.891311] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.556 [2024-02-14 19:12:39.891324] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.556 [2024-02-14 19:12:39.891334] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.556 [2024-02-14 19:12:39.891554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.556 [2024-02-14 19:12:39.891680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.556 [2024-02-14 19:12:39.891688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.492 19:12:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:03.492 19:12:40 -- common/autotest_common.sh@850 -- # return 0 00:12:03.492 19:12:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:03.492 19:12:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:03.492 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.492 19:12:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.492 19:12:40 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.492 19:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:03.492 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.492 [2024-02-14 19:12:40.642638] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.492 19:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:03.492 19:12:40 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:03.492 19:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:03.492 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.492 19:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:03.492 19:12:40 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.492 19:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:03.492 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.492 [2024-02-14 19:12:40.662789] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.492 19:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:03.492 19:12:40 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:03.492 19:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:03.492 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.492 NULL1 00:12:03.492 19:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:03.492 19:12:40 -- target/connect_stress.sh@21 -- # PERF_PID=68675 00:12:03.492 19:12:40 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:03.492 19:12:40 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:03.492 19:12:40 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.492 19:12:40 -- target/connect_stress.sh@28 -- # cat 00:12:03.492 19:12:40 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:03.492 19:12:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.492 19:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:03.492 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.751 19:12:41 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:12:03.751 19:12:41 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:03.751 19:12:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.751 19:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:03.751 19:12:41 -- common/autotest_common.sh@10 -- # set +x 00:12:04.009 19:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:04.009 19:12:41 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:04.009 19:12:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.009 19:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:04.009 19:12:41 -- common/autotest_common.sh@10 -- # set +x 00:12:04.577 19:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:04.577 19:12:41 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:04.577 19:12:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.577 19:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:04.577 19:12:41 -- common/autotest_common.sh@10 -- # set +x 00:12:04.835 19:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:04.835 19:12:42 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:04.835 19:12:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.835 19:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:04.835 19:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.094 19:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.094 19:12:42 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:05.094 19:12:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.094 19:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.094 19:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.352 19:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.352 19:12:42 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:05.352 19:12:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.352 19:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.352 19:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.918 19:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.918 19:12:43 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:05.918 19:12:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.918 19:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.918 19:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.176 19:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.176 19:12:43 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:06.176 19:12:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.176 19:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.176 19:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.434 19:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.434 19:12:43 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:06.434 19:12:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.434 19:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.434 19:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.693 19:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.693 19:12:44 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:06.693 19:12:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.693 19:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.693 19:12:44 -- common/autotest_common.sh@10 -- # set +x 00:12:06.950 19:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.950 
19:12:44 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:06.950 19:12:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.950 19:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.950 19:12:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.515 19:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.515 19:12:44 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:07.515 19:12:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.515 19:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.515 19:12:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.773 19:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.773 19:12:44 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:07.773 19:12:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.773 19:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.773 19:12:44 -- common/autotest_common.sh@10 -- # set +x 00:12:08.031 19:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:08.031 19:12:45 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:08.031 19:12:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.031 19:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.031 19:12:45 -- common/autotest_common.sh@10 -- # set +x 00:12:08.288 19:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:08.288 19:12:45 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:08.288 19:12:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.288 19:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.288 19:12:45 -- common/autotest_common.sh@10 -- # set +x 00:12:08.546 19:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:08.546 19:12:45 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:08.546 19:12:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.546 19:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.546 19:12:45 -- common/autotest_common.sh@10 -- # set +x 00:12:09.112 19:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.112 19:12:46 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:09.112 19:12:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.112 19:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.112 19:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.369 19:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.369 19:12:46 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:09.369 19:12:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.369 19:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.369 19:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.627 19:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.627 19:12:46 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:09.627 19:12:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.627 19:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.627 19:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.886 19:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.886 19:12:47 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:09.886 19:12:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.886 19:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.886 19:12:47 -- common/autotest_common.sh@10 -- # set +x 00:12:10.453 19:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.453 19:12:47 -- 
target/connect_stress.sh@34 -- # kill -0 68675 00:12:10.453 19:12:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.453 19:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.453 19:12:47 -- common/autotest_common.sh@10 -- # set +x 00:12:10.730 19:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.730 19:12:47 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:10.730 19:12:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.730 19:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.730 19:12:47 -- common/autotest_common.sh@10 -- # set +x 00:12:11.009 19:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:11.009 19:12:48 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:11.009 19:12:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.009 19:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:11.009 19:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:11.267 19:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:11.267 19:12:48 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:11.267 19:12:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.267 19:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:11.267 19:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:11.526 19:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:11.526 19:12:48 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:11.526 19:12:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.526 19:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:11.526 19:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:11.784 19:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:11.784 19:12:49 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:11.784 19:12:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.784 19:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:11.784 19:12:49 -- common/autotest_common.sh@10 -- # set +x 00:12:12.351 19:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.351 19:12:49 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:12.351 19:12:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.351 19:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.351 19:12:49 -- common/autotest_common.sh@10 -- # set +x 00:12:12.609 19:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.609 19:12:49 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:12.609 19:12:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.609 19:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.609 19:12:49 -- common/autotest_common.sh@10 -- # set +x 00:12:12.866 19:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.866 19:12:50 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:12.866 19:12:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.866 19:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.866 19:12:50 -- common/autotest_common.sh@10 -- # set +x 00:12:13.125 19:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:13.125 19:12:50 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:13.125 19:12:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.125 19:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:13.125 19:12:50 -- common/autotest_common.sh@10 -- # set +x 00:12:13.692 19:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:13.692 19:12:50 -- target/connect_stress.sh@34 -- # 
kill -0 68675 00:12:13.692 19:12:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.692 19:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:13.692 19:12:50 -- common/autotest_common.sh@10 -- # set +x 00:12:13.692 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:13.951 19:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:13.951 19:12:51 -- target/connect_stress.sh@34 -- # kill -0 68675 00:12:13.951 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (68675) - No such process 00:12:13.951 19:12:51 -- target/connect_stress.sh@38 -- # wait 68675 00:12:13.951 19:12:51 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:13.951 19:12:51 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:13.951 19:12:51 -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:13.951 19:12:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:13.951 19:12:51 -- nvmf/common.sh@116 -- # sync 00:12:13.951 19:12:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:13.951 19:12:51 -- nvmf/common.sh@119 -- # set +e 00:12:13.951 19:12:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:13.951 19:12:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:13.951 rmmod nvme_tcp 00:12:13.951 rmmod nvme_fabrics 00:12:13.951 rmmod nvme_keyring 00:12:13.951 19:12:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:13.951 19:12:51 -- nvmf/common.sh@123 -- # set -e 00:12:13.951 19:12:51 -- nvmf/common.sh@124 -- # return 0 00:12:13.951 19:12:51 -- nvmf/common.sh@477 -- # '[' -n 68617 ']' 00:12:13.951 19:12:51 -- nvmf/common.sh@478 -- # killprocess 68617 00:12:13.951 19:12:51 -- common/autotest_common.sh@924 -- # '[' -z 68617 ']' 00:12:13.951 19:12:51 -- common/autotest_common.sh@928 -- # kill -0 68617 00:12:13.951 19:12:51 -- common/autotest_common.sh@929 -- # uname 00:12:13.951 19:12:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:13.951 19:12:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 68617 00:12:13.951 19:12:51 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:12:13.951 19:12:51 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:12:13.951 19:12:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 68617' 00:12:13.951 killing process with pid 68617 00:12:13.951 19:12:51 -- common/autotest_common.sh@943 -- # kill 68617 00:12:13.951 19:12:51 -- common/autotest_common.sh@948 -- # wait 68617 00:12:14.210 19:12:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:14.210 19:12:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:14.210 19:12:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:14.210 19:12:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:14.210 19:12:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:14.210 19:12:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.210 19:12:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.210 19:12:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.210 19:12:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:14.210 00:12:14.210 real 0m12.546s 00:12:14.210 user 0m41.487s 00:12:14.210 sys 0m3.199s 00:12:14.210 19:12:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:14.210 19:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:14.210 ************************************ 
00:12:14.210 END TEST nvmf_connect_stress 00:12:14.210 ************************************ 00:12:14.469 19:12:51 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:14.469 19:12:51 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:14.469 19:12:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:14.469 19:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:14.469 ************************************ 00:12:14.469 START TEST nvmf_fused_ordering 00:12:14.469 ************************************ 00:12:14.469 19:12:51 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:14.469 * Looking for test storage... 00:12:14.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:14.469 19:12:51 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:14.469 19:12:51 -- nvmf/common.sh@7 -- # uname -s 00:12:14.469 19:12:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.469 19:12:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.469 19:12:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.469 19:12:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.469 19:12:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.469 19:12:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.469 19:12:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.469 19:12:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.469 19:12:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.469 19:12:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.469 19:12:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:12:14.469 19:12:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:12:14.469 19:12:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.469 19:12:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.469 19:12:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:14.469 19:12:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.469 19:12:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.469 19:12:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.469 19:12:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.469 19:12:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.469 19:12:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.470 19:12:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.470 19:12:51 -- paths/export.sh@5 -- # export PATH 00:12:14.470 19:12:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.470 19:12:51 -- nvmf/common.sh@46 -- # : 0 00:12:14.470 19:12:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:14.470 19:12:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:14.470 19:12:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:14.470 19:12:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.470 19:12:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.470 19:12:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:14.470 19:12:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:14.470 19:12:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:14.470 19:12:51 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:14.470 19:12:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:14.470 19:12:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.470 19:12:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:14.470 19:12:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:14.470 19:12:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:14.470 19:12:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.470 19:12:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.470 19:12:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.470 19:12:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:14.470 19:12:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:14.470 19:12:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:14.470 19:12:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:14.470 19:12:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:14.470 19:12:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:14.470 19:12:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.470 
19:12:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.470 19:12:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:14.470 19:12:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:14.470 19:12:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:14.470 19:12:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:14.470 19:12:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:14.470 19:12:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.470 19:12:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:14.470 19:12:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:14.470 19:12:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:14.470 19:12:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:14.470 19:12:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:14.470 19:12:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:14.470 Cannot find device "nvmf_tgt_br" 00:12:14.470 19:12:51 -- nvmf/common.sh@154 -- # true 00:12:14.470 19:12:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.470 Cannot find device "nvmf_tgt_br2" 00:12:14.470 19:12:51 -- nvmf/common.sh@155 -- # true 00:12:14.470 19:12:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:14.470 19:12:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:14.470 Cannot find device "nvmf_tgt_br" 00:12:14.470 19:12:51 -- nvmf/common.sh@157 -- # true 00:12:14.470 19:12:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:14.470 Cannot find device "nvmf_tgt_br2" 00:12:14.470 19:12:51 -- nvmf/common.sh@158 -- # true 00:12:14.470 19:12:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:14.728 19:12:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:14.728 19:12:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:14.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.728 19:12:51 -- nvmf/common.sh@161 -- # true 00:12:14.728 19:12:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:14.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.728 19:12:51 -- nvmf/common.sh@162 -- # true 00:12:14.728 19:12:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:14.728 19:12:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:14.728 19:12:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:14.728 19:12:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:14.728 19:12:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:14.728 19:12:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:14.728 19:12:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:14.728 19:12:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:14.728 19:12:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:14.728 19:12:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:14.728 19:12:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:14.729 
19:12:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:14.729 19:12:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:14.729 19:12:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:14.729 19:12:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:14.729 19:12:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:14.729 19:12:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:14.729 19:12:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:14.729 19:12:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:14.729 19:12:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:14.729 19:12:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:14.729 19:12:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:14.729 19:12:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:14.729 19:12:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:14.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:12:14.729 00:12:14.729 --- 10.0.0.2 ping statistics --- 00:12:14.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.729 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:12:14.729 19:12:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:14.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:14.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:12:14.729 00:12:14.729 --- 10.0.0.3 ping statistics --- 00:12:14.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.729 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:14.729 19:12:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:14.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:14.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:14.729 00:12:14.729 --- 10.0.0.1 ping statistics --- 00:12:14.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.729 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:14.729 19:12:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.729 19:12:52 -- nvmf/common.sh@421 -- # return 0 00:12:14.729 19:12:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:14.729 19:12:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.729 19:12:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:14.729 19:12:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:14.729 19:12:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.729 19:12:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:14.729 19:12:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:14.987 19:12:52 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:14.988 19:12:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:14.988 19:12:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:14.988 19:12:52 -- common/autotest_common.sh@10 -- # set +x 00:12:14.988 19:12:52 -- nvmf/common.sh@469 -- # nvmfpid=69004 00:12:14.988 19:12:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:14.988 19:12:52 -- nvmf/common.sh@470 -- # waitforlisten 69004 00:12:14.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.988 19:12:52 -- common/autotest_common.sh@817 -- # '[' -z 69004 ']' 00:12:14.988 19:12:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.988 19:12:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:14.988 19:12:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.988 19:12:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:14.988 19:12:52 -- common/autotest_common.sh@10 -- # set +x 00:12:14.988 [2024-02-14 19:12:52.223148] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:14.988 [2024-02-14 19:12:52.223537] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.988 [2024-02-14 19:12:52.360170] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.247 [2024-02-14 19:12:52.497068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:15.247 [2024-02-14 19:12:52.497412] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.247 [2024-02-14 19:12:52.497434] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.247 [2024-02-14 19:12:52.497443] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
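The trace between nvmftestinit and the reactor notice above is the harness building its veth-based test topology: the initiator side stays on the host as nvmf_init_if (10.0.0.1), the target interfaces live inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), both sides hang off the nvmf_br bridge, TCP port 4420 is opened in iptables, reachability is checked with ping, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace on core mask 0x2. A condensed stand-alone sketch of the same steps follows; the interface names, addresses and the nvmf_tgt command line are taken verbatim from the trace, while the repeated "link up" commands are collapsed into loops purely for brevity (an assumption about presentation, not about the commands themselves):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends move into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host-to-namespace reachability
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &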
00:12:15.247 [2024-02-14 19:12:52.497483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.815 19:12:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:15.815 19:12:53 -- common/autotest_common.sh@850 -- # return 0 00:12:15.815 19:12:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:15.815 19:12:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:15.815 19:12:53 -- common/autotest_common.sh@10 -- # set +x 00:12:15.815 19:12:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.815 19:12:53 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.815 19:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.815 19:12:53 -- common/autotest_common.sh@10 -- # set +x 00:12:15.815 [2024-02-14 19:12:53.222974] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.815 19:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.815 19:12:53 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:15.815 19:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.815 19:12:53 -- common/autotest_common.sh@10 -- # set +x 00:12:16.074 19:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:16.074 19:12:53 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.074 19:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:16.074 19:12:53 -- common/autotest_common.sh@10 -- # set +x 00:12:16.074 [2024-02-14 19:12:53.247100] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.074 19:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:16.074 19:12:53 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:16.074 19:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:16.074 19:12:53 -- common/autotest_common.sh@10 -- # set +x 00:12:16.074 NULL1 00:12:16.074 19:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:16.074 19:12:53 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:16.074 19:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:16.074 19:12:53 -- common/autotest_common.sh@10 -- # set +x 00:12:16.074 19:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:16.074 19:12:53 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:16.074 19:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:16.074 19:12:53 -- common/autotest_common.sh@10 -- # set +x 00:12:16.074 19:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:16.074 19:12:53 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:16.074 [2024-02-14 19:12:53.310585] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:12:16.074 [2024-02-14 19:12:53.310626] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69054 ] 00:12:16.642 Attached to nqn.2016-06.io.spdk:cnode1 00:12:16.642 Namespace ID: 1 size: 1GB 00:12:16.642 fused_ordering(0) 00:12:16.642 fused_ordering(1) 00:12:16.642 fused_ordering(2) 00:12:16.642 fused_ordering(3) 00:12:16.642 fused_ordering(4) 00:12:16.642 fused_ordering(5) 00:12:16.642 fused_ordering(6) 00:12:16.642 fused_ordering(7) 00:12:16.642 fused_ordering(8) 00:12:16.642 fused_ordering(9) 00:12:16.642 fused_ordering(10) 00:12:16.642 fused_ordering(11) 00:12:16.642 fused_ordering(12) 00:12:16.642 fused_ordering(13) 00:12:16.642 fused_ordering(14) 00:12:16.642 fused_ordering(15) 00:12:16.642 fused_ordering(16) 00:12:16.642 fused_ordering(17) 00:12:16.642 fused_ordering(18) 00:12:16.642 fused_ordering(19) 00:12:16.642 fused_ordering(20) 00:12:16.642 fused_ordering(21) 00:12:16.642 fused_ordering(22) 00:12:16.642 fused_ordering(23) 00:12:16.642 fused_ordering(24) 00:12:16.642 fused_ordering(25) 00:12:16.642 fused_ordering(26) 00:12:16.642 fused_ordering(27) 00:12:16.642 fused_ordering(28) 00:12:16.642 fused_ordering(29) 00:12:16.642 fused_ordering(30) 00:12:16.642 fused_ordering(31) 00:12:16.642 fused_ordering(32) 00:12:16.642 fused_ordering(33) 00:12:16.642 fused_ordering(34) 00:12:16.642 fused_ordering(35) 00:12:16.642 fused_ordering(36) 00:12:16.642 fused_ordering(37) 00:12:16.642 fused_ordering(38) 00:12:16.642 fused_ordering(39) 00:12:16.642 fused_ordering(40) 00:12:16.642 fused_ordering(41) 00:12:16.642 fused_ordering(42) 00:12:16.642 fused_ordering(43) 00:12:16.642 fused_ordering(44) 00:12:16.642 fused_ordering(45) 00:12:16.642 fused_ordering(46) 00:12:16.642 fused_ordering(47) 00:12:16.642 fused_ordering(48) 00:12:16.642 fused_ordering(49) 00:12:16.642 fused_ordering(50) 00:12:16.642 fused_ordering(51) 00:12:16.642 fused_ordering(52) 00:12:16.642 fused_ordering(53) 00:12:16.642 fused_ordering(54) 00:12:16.642 fused_ordering(55) 00:12:16.642 fused_ordering(56) 00:12:16.642 fused_ordering(57) 00:12:16.642 fused_ordering(58) 00:12:16.642 fused_ordering(59) 00:12:16.642 fused_ordering(60) 00:12:16.642 fused_ordering(61) 00:12:16.642 fused_ordering(62) 00:12:16.642 fused_ordering(63) 00:12:16.642 fused_ordering(64) 00:12:16.642 fused_ordering(65) 00:12:16.642 fused_ordering(66) 00:12:16.642 fused_ordering(67) 00:12:16.642 fused_ordering(68) 00:12:16.642 fused_ordering(69) 00:12:16.642 fused_ordering(70) 00:12:16.642 fused_ordering(71) 00:12:16.642 fused_ordering(72) 00:12:16.642 fused_ordering(73) 00:12:16.642 fused_ordering(74) 00:12:16.642 fused_ordering(75) 00:12:16.642 fused_ordering(76) 00:12:16.642 fused_ordering(77) 00:12:16.642 fused_ordering(78) 00:12:16.642 fused_ordering(79) 00:12:16.642 fused_ordering(80) 00:12:16.642 fused_ordering(81) 00:12:16.642 fused_ordering(82) 00:12:16.642 fused_ordering(83) 00:12:16.642 fused_ordering(84) 00:12:16.642 fused_ordering(85) 00:12:16.642 fused_ordering(86) 00:12:16.642 fused_ordering(87) 00:12:16.642 fused_ordering(88) 00:12:16.642 fused_ordering(89) 00:12:16.642 fused_ordering(90) 00:12:16.642 fused_ordering(91) 00:12:16.642 fused_ordering(92) 00:12:16.642 fused_ordering(93) 00:12:16.642 fused_ordering(94) 00:12:16.642 fused_ordering(95) 00:12:16.642 fused_ordering(96) 00:12:16.642 fused_ordering(97) 00:12:16.642 fused_ordering(98) 
00:12:16.643 fused_ordering(99) 00:12:16.643 fused_ordering(100) 00:12:16.643 fused_ordering(101) 00:12:16.643 fused_ordering(102) 00:12:16.643 fused_ordering(103) 00:12:16.643 fused_ordering(104) 00:12:16.643 fused_ordering(105) 00:12:16.643 fused_ordering(106) 00:12:16.643 fused_ordering(107) 00:12:16.643 fused_ordering(108) 00:12:16.643 fused_ordering(109) 00:12:16.643 fused_ordering(110) 00:12:16.643 fused_ordering(111) 00:12:16.643 fused_ordering(112) 00:12:16.643 fused_ordering(113) 00:12:16.643 fused_ordering(114) 00:12:16.643 fused_ordering(115) 00:12:16.643 fused_ordering(116) 00:12:16.643 fused_ordering(117) 00:12:16.643 fused_ordering(118) 00:12:16.643 fused_ordering(119) 00:12:16.643 fused_ordering(120) 00:12:16.643 fused_ordering(121) 00:12:16.643 fused_ordering(122) 00:12:16.643 fused_ordering(123) 00:12:16.643 fused_ordering(124) 00:12:16.643 fused_ordering(125) 00:12:16.643 fused_ordering(126) 00:12:16.643 fused_ordering(127) 00:12:16.643 fused_ordering(128) 00:12:16.643 fused_ordering(129) 00:12:16.643 fused_ordering(130) 00:12:16.643 fused_ordering(131) 00:12:16.643 fused_ordering(132) 00:12:16.643 fused_ordering(133) 00:12:16.643 fused_ordering(134) 00:12:16.643 fused_ordering(135) 00:12:16.643 fused_ordering(136) 00:12:16.643 fused_ordering(137) 00:12:16.643 fused_ordering(138) 00:12:16.643 fused_ordering(139) 00:12:16.643 fused_ordering(140) 00:12:16.643 fused_ordering(141) 00:12:16.643 fused_ordering(142) 00:12:16.643 fused_ordering(143) 00:12:16.643 fused_ordering(144) 00:12:16.643 fused_ordering(145) 00:12:16.643 fused_ordering(146) 00:12:16.643 fused_ordering(147) 00:12:16.643 fused_ordering(148) 00:12:16.643 fused_ordering(149) 00:12:16.643 fused_ordering(150) 00:12:16.643 fused_ordering(151) 00:12:16.643 fused_ordering(152) 00:12:16.643 fused_ordering(153) 00:12:16.643 fused_ordering(154) 00:12:16.643 fused_ordering(155) 00:12:16.643 fused_ordering(156) 00:12:16.643 fused_ordering(157) 00:12:16.643 fused_ordering(158) 00:12:16.643 fused_ordering(159) 00:12:16.643 fused_ordering(160) 00:12:16.643 fused_ordering(161) 00:12:16.643 fused_ordering(162) 00:12:16.643 fused_ordering(163) 00:12:16.643 fused_ordering(164) 00:12:16.643 fused_ordering(165) 00:12:16.643 fused_ordering(166) 00:12:16.643 fused_ordering(167) 00:12:16.643 fused_ordering(168) 00:12:16.643 fused_ordering(169) 00:12:16.643 fused_ordering(170) 00:12:16.643 fused_ordering(171) 00:12:16.643 fused_ordering(172) 00:12:16.643 fused_ordering(173) 00:12:16.643 fused_ordering(174) 00:12:16.643 fused_ordering(175) 00:12:16.643 fused_ordering(176) 00:12:16.643 fused_ordering(177) 00:12:16.643 fused_ordering(178) 00:12:16.643 fused_ordering(179) 00:12:16.643 fused_ordering(180) 00:12:16.643 fused_ordering(181) 00:12:16.643 fused_ordering(182) 00:12:16.643 fused_ordering(183) 00:12:16.643 fused_ordering(184) 00:12:16.643 fused_ordering(185) 00:12:16.643 fused_ordering(186) 00:12:16.643 fused_ordering(187) 00:12:16.643 fused_ordering(188) 00:12:16.643 fused_ordering(189) 00:12:16.643 fused_ordering(190) 00:12:16.643 fused_ordering(191) 00:12:16.643 fused_ordering(192) 00:12:16.643 fused_ordering(193) 00:12:16.643 fused_ordering(194) 00:12:16.643 fused_ordering(195) 00:12:16.643 fused_ordering(196) 00:12:16.643 fused_ordering(197) 00:12:16.643 fused_ordering(198) 00:12:16.643 fused_ordering(199) 00:12:16.643 fused_ordering(200) 00:12:16.643 fused_ordering(201) 00:12:16.643 fused_ordering(202) 00:12:16.643 fused_ordering(203) 00:12:16.643 fused_ordering(204) 00:12:16.643 fused_ordering(205) 00:12:16.643 
fused_ordering(206) 00:12:16.643 fused_ordering(207) 00:12:16.643 fused_ordering(208) 00:12:16.643 fused_ordering(209) 00:12:16.643 fused_ordering(210) 00:12:16.643 fused_ordering(211) 00:12:16.643 fused_ordering(212) 00:12:16.643 fused_ordering(213) 00:12:16.643 fused_ordering(214) 00:12:16.643 fused_ordering(215) 00:12:16.643 fused_ordering(216) 00:12:16.643 fused_ordering(217) 00:12:16.643 fused_ordering(218) 00:12:16.643 fused_ordering(219) 00:12:16.643 fused_ordering(220) 00:12:16.643 fused_ordering(221) 00:12:16.643 fused_ordering(222) 00:12:16.643 fused_ordering(223) 00:12:16.643 fused_ordering(224) 00:12:16.643 fused_ordering(225) 00:12:16.643 fused_ordering(226) 00:12:16.643 fused_ordering(227) 00:12:16.643 fused_ordering(228) 00:12:16.643 fused_ordering(229) 00:12:16.643 fused_ordering(230) 00:12:16.643 fused_ordering(231) 00:12:16.643 fused_ordering(232) 00:12:16.643 fused_ordering(233) 00:12:16.643 fused_ordering(234) 00:12:16.643 fused_ordering(235) 00:12:16.643 fused_ordering(236) 00:12:16.643 fused_ordering(237) 00:12:16.643 fused_ordering(238) 00:12:16.643 fused_ordering(239) 00:12:16.643 fused_ordering(240) 00:12:16.643 fused_ordering(241) 00:12:16.643 fused_ordering(242) 00:12:16.643 fused_ordering(243) 00:12:16.643 fused_ordering(244) 00:12:16.643 fused_ordering(245) 00:12:16.643 fused_ordering(246) 00:12:16.643 fused_ordering(247) 00:12:16.643 fused_ordering(248) 00:12:16.643 fused_ordering(249) 00:12:16.643 fused_ordering(250) 00:12:16.643 fused_ordering(251) 00:12:16.643 fused_ordering(252) 00:12:16.643 fused_ordering(253) 00:12:16.643 fused_ordering(254) 00:12:16.643 fused_ordering(255) 00:12:16.643 fused_ordering(256) 00:12:16.643 fused_ordering(257) 00:12:16.643 fused_ordering(258) 00:12:16.643 fused_ordering(259) 00:12:16.643 fused_ordering(260) 00:12:16.643 fused_ordering(261) 00:12:16.643 fused_ordering(262) 00:12:16.643 fused_ordering(263) 00:12:16.643 fused_ordering(264) 00:12:16.643 fused_ordering(265) 00:12:16.643 fused_ordering(266) 00:12:16.643 fused_ordering(267) 00:12:16.643 fused_ordering(268) 00:12:16.643 fused_ordering(269) 00:12:16.643 fused_ordering(270) 00:12:16.643 fused_ordering(271) 00:12:16.643 fused_ordering(272) 00:12:16.643 fused_ordering(273) 00:12:16.643 fused_ordering(274) 00:12:16.643 fused_ordering(275) 00:12:16.643 fused_ordering(276) 00:12:16.643 fused_ordering(277) 00:12:16.643 fused_ordering(278) 00:12:16.643 fused_ordering(279) 00:12:16.643 fused_ordering(280) 00:12:16.643 fused_ordering(281) 00:12:16.643 fused_ordering(282) 00:12:16.643 fused_ordering(283) 00:12:16.643 fused_ordering(284) 00:12:16.643 fused_ordering(285) 00:12:16.643 fused_ordering(286) 00:12:16.643 fused_ordering(287) 00:12:16.643 fused_ordering(288) 00:12:16.643 fused_ordering(289) 00:12:16.643 fused_ordering(290) 00:12:16.643 fused_ordering(291) 00:12:16.643 fused_ordering(292) 00:12:16.643 fused_ordering(293) 00:12:16.643 fused_ordering(294) 00:12:16.643 fused_ordering(295) 00:12:16.643 fused_ordering(296) 00:12:16.643 fused_ordering(297) 00:12:16.643 fused_ordering(298) 00:12:16.643 fused_ordering(299) 00:12:16.643 fused_ordering(300) 00:12:16.643 fused_ordering(301) 00:12:16.643 fused_ordering(302) 00:12:16.643 fused_ordering(303) 00:12:16.643 fused_ordering(304) 00:12:16.643 fused_ordering(305) 00:12:16.643 fused_ordering(306) 00:12:16.643 fused_ordering(307) 00:12:16.643 fused_ordering(308) 00:12:16.643 fused_ordering(309) 00:12:16.643 fused_ordering(310) 00:12:16.643 fused_ordering(311) 00:12:16.643 fused_ordering(312) 00:12:16.643 fused_ordering(313) 
00:12:16.643 fused_ordering(314) 00:12:16.643 fused_ordering(315) 00:12:16.643 fused_ordering(316) 00:12:16.643 fused_ordering(317) 00:12:16.643 fused_ordering(318) 00:12:16.643 fused_ordering(319) 00:12:16.643 fused_ordering(320) 00:12:16.643 fused_ordering(321) 00:12:16.643 fused_ordering(322) 00:12:16.643 fused_ordering(323) 00:12:16.643 fused_ordering(324) 00:12:16.643 fused_ordering(325) 00:12:16.643 fused_ordering(326) 00:12:16.643 fused_ordering(327) 00:12:16.643 fused_ordering(328) 00:12:16.643 fused_ordering(329) 00:12:16.643 fused_ordering(330) 00:12:16.643 fused_ordering(331) 00:12:16.643 fused_ordering(332) 00:12:16.643 fused_ordering(333) 00:12:16.643 fused_ordering(334) 00:12:16.643 fused_ordering(335) 00:12:16.643 fused_ordering(336) 00:12:16.643 fused_ordering(337) 00:12:16.643 fused_ordering(338) 00:12:16.643 fused_ordering(339) 00:12:16.643 fused_ordering(340) 00:12:16.643 fused_ordering(341) 00:12:16.643 fused_ordering(342) 00:12:16.643 fused_ordering(343) 00:12:16.643 fused_ordering(344) 00:12:16.643 fused_ordering(345) 00:12:16.643 fused_ordering(346) 00:12:16.643 fused_ordering(347) 00:12:16.643 fused_ordering(348) 00:12:16.643 fused_ordering(349) 00:12:16.643 fused_ordering(350) 00:12:16.643 fused_ordering(351) 00:12:16.643 fused_ordering(352) 00:12:16.643 fused_ordering(353) 00:12:16.643 fused_ordering(354) 00:12:16.643 fused_ordering(355) 00:12:16.643 fused_ordering(356) 00:12:16.643 fused_ordering(357) 00:12:16.643 fused_ordering(358) 00:12:16.643 fused_ordering(359) 00:12:16.643 fused_ordering(360) 00:12:16.643 fused_ordering(361) 00:12:16.643 fused_ordering(362) 00:12:16.643 fused_ordering(363) 00:12:16.643 fused_ordering(364) 00:12:16.643 fused_ordering(365) 00:12:16.643 fused_ordering(366) 00:12:16.643 fused_ordering(367) 00:12:16.643 fused_ordering(368) 00:12:16.643 fused_ordering(369) 00:12:16.644 fused_ordering(370) 00:12:16.644 fused_ordering(371) 00:12:16.644 fused_ordering(372) 00:12:16.644 fused_ordering(373) 00:12:16.644 fused_ordering(374) 00:12:16.644 fused_ordering(375) 00:12:16.644 fused_ordering(376) 00:12:16.644 fused_ordering(377) 00:12:16.644 fused_ordering(378) 00:12:16.644 fused_ordering(379) 00:12:16.644 fused_ordering(380) 00:12:16.644 fused_ordering(381) 00:12:16.644 fused_ordering(382) 00:12:16.644 fused_ordering(383) 00:12:16.644 fused_ordering(384) 00:12:16.644 fused_ordering(385) 00:12:16.644 fused_ordering(386) 00:12:16.644 fused_ordering(387) 00:12:16.644 fused_ordering(388) 00:12:16.644 fused_ordering(389) 00:12:16.644 fused_ordering(390) 00:12:16.644 fused_ordering(391) 00:12:16.644 fused_ordering(392) 00:12:16.644 fused_ordering(393) 00:12:16.644 fused_ordering(394) 00:12:16.644 fused_ordering(395) 00:12:16.644 fused_ordering(396) 00:12:16.644 fused_ordering(397) 00:12:16.644 fused_ordering(398) 00:12:16.644 fused_ordering(399) 00:12:16.644 fused_ordering(400) 00:12:16.644 fused_ordering(401) 00:12:16.644 fused_ordering(402) 00:12:16.644 fused_ordering(403) 00:12:16.644 fused_ordering(404) 00:12:16.644 fused_ordering(405) 00:12:16.644 fused_ordering(406) 00:12:16.644 fused_ordering(407) 00:12:16.644 fused_ordering(408) 00:12:16.644 fused_ordering(409) 00:12:16.644 fused_ordering(410) 00:12:17.211 fused_ordering(411) 00:12:17.211 fused_ordering(412) 00:12:17.211 fused_ordering(413) 00:12:17.211 fused_ordering(414) 00:12:17.211 fused_ordering(415) 00:12:17.211 fused_ordering(416) 00:12:17.211 fused_ordering(417) 00:12:17.211 fused_ordering(418) 00:12:17.211 fused_ordering(419) 00:12:17.211 fused_ordering(420) 00:12:17.211 
fused_ordering(421) 00:12:17.211 fused_ordering(422) 00:12:17.211 fused_ordering(423) 00:12:17.211 fused_ordering(424) 00:12:17.211 fused_ordering(425) 00:12:17.211 fused_ordering(426) 00:12:17.211 fused_ordering(427) 00:12:17.211 fused_ordering(428) 00:12:17.211 fused_ordering(429) 00:12:17.211 fused_ordering(430) 00:12:17.211 fused_ordering(431) 00:12:17.211 fused_ordering(432) 00:12:17.211 fused_ordering(433) 00:12:17.211 fused_ordering(434) 00:12:17.211 fused_ordering(435) 00:12:17.211 fused_ordering(436) 00:12:17.211 fused_ordering(437) 00:12:17.211 fused_ordering(438) 00:12:17.211 fused_ordering(439) 00:12:17.211 fused_ordering(440) 00:12:17.211 fused_ordering(441) 00:12:17.211 fused_ordering(442) 00:12:17.211 fused_ordering(443) 00:12:17.211 fused_ordering(444) 00:12:17.211 fused_ordering(445) 00:12:17.211 fused_ordering(446) 00:12:17.211 fused_ordering(447) 00:12:17.211 fused_ordering(448) 00:12:17.211 fused_ordering(449) 00:12:17.211 fused_ordering(450) 00:12:17.211 fused_ordering(451) 00:12:17.211 fused_ordering(452) 00:12:17.211 fused_ordering(453) 00:12:17.211 fused_ordering(454) 00:12:17.211 fused_ordering(455) 00:12:17.211 fused_ordering(456) 00:12:17.211 fused_ordering(457) 00:12:17.211 fused_ordering(458) 00:12:17.211 fused_ordering(459) 00:12:17.211 fused_ordering(460) 00:12:17.211 fused_ordering(461) 00:12:17.211 fused_ordering(462) 00:12:17.211 fused_ordering(463) 00:12:17.211 fused_ordering(464) 00:12:17.211 fused_ordering(465) 00:12:17.211 fused_ordering(466) 00:12:17.211 fused_ordering(467) 00:12:17.211 fused_ordering(468) 00:12:17.211 fused_ordering(469) 00:12:17.211 fused_ordering(470) 00:12:17.211 fused_ordering(471) 00:12:17.211 fused_ordering(472) 00:12:17.211 fused_ordering(473) 00:12:17.211 fused_ordering(474) 00:12:17.211 fused_ordering(475) 00:12:17.211 fused_ordering(476) 00:12:17.211 fused_ordering(477) 00:12:17.211 fused_ordering(478) 00:12:17.211 fused_ordering(479) 00:12:17.211 fused_ordering(480) 00:12:17.212 fused_ordering(481) 00:12:17.212 fused_ordering(482) 00:12:17.212 fused_ordering(483) 00:12:17.212 fused_ordering(484) 00:12:17.212 fused_ordering(485) 00:12:17.212 fused_ordering(486) 00:12:17.212 fused_ordering(487) 00:12:17.212 fused_ordering(488) 00:12:17.212 fused_ordering(489) 00:12:17.212 fused_ordering(490) 00:12:17.212 fused_ordering(491) 00:12:17.212 fused_ordering(492) 00:12:17.212 fused_ordering(493) 00:12:17.212 fused_ordering(494) 00:12:17.212 fused_ordering(495) 00:12:17.212 fused_ordering(496) 00:12:17.212 fused_ordering(497) 00:12:17.212 fused_ordering(498) 00:12:17.212 fused_ordering(499) 00:12:17.212 fused_ordering(500) 00:12:17.212 fused_ordering(501) 00:12:17.212 fused_ordering(502) 00:12:17.212 fused_ordering(503) 00:12:17.212 fused_ordering(504) 00:12:17.212 fused_ordering(505) 00:12:17.212 fused_ordering(506) 00:12:17.212 fused_ordering(507) 00:12:17.212 fused_ordering(508) 00:12:17.212 fused_ordering(509) 00:12:17.212 fused_ordering(510) 00:12:17.212 fused_ordering(511) 00:12:17.212 fused_ordering(512) 00:12:17.212 fused_ordering(513) 00:12:17.212 fused_ordering(514) 00:12:17.212 fused_ordering(515) 00:12:17.212 fused_ordering(516) 00:12:17.212 fused_ordering(517) 00:12:17.212 fused_ordering(518) 00:12:17.212 fused_ordering(519) 00:12:17.212 fused_ordering(520) 00:12:17.212 fused_ordering(521) 00:12:17.212 fused_ordering(522) 00:12:17.212 fused_ordering(523) 00:12:17.212 fused_ordering(524) 00:12:17.212 fused_ordering(525) 00:12:17.212 fused_ordering(526) 00:12:17.212 fused_ordering(527) 00:12:17.212 fused_ordering(528) 
00:12:17.212 fused_ordering(529) 00:12:17.212 fused_ordering(530) 00:12:17.212 fused_ordering(531) 00:12:17.212 fused_ordering(532) 00:12:17.212 fused_ordering(533) 00:12:17.212 fused_ordering(534) 00:12:17.212 fused_ordering(535) 00:12:17.212 fused_ordering(536) 00:12:17.212 fused_ordering(537) 00:12:17.212 fused_ordering(538) 00:12:17.212 fused_ordering(539) 00:12:17.212 fused_ordering(540) 00:12:17.212 fused_ordering(541) 00:12:17.212 fused_ordering(542) 00:12:17.212 fused_ordering(543) 00:12:17.212 fused_ordering(544) 00:12:17.212 fused_ordering(545) 00:12:17.212 fused_ordering(546) 00:12:17.212 fused_ordering(547) 00:12:17.212 fused_ordering(548) 00:12:17.212 fused_ordering(549) 00:12:17.212 fused_ordering(550) 00:12:17.212 fused_ordering(551) 00:12:17.212 fused_ordering(552) 00:12:17.212 fused_ordering(553) 00:12:17.212 fused_ordering(554) 00:12:17.212 fused_ordering(555) 00:12:17.212 fused_ordering(556) 00:12:17.212 fused_ordering(557) 00:12:17.212 fused_ordering(558) 00:12:17.212 fused_ordering(559) 00:12:17.212 fused_ordering(560) 00:12:17.212 fused_ordering(561) 00:12:17.212 fused_ordering(562) 00:12:17.212 fused_ordering(563) 00:12:17.212 fused_ordering(564) 00:12:17.212 fused_ordering(565) 00:12:17.212 fused_ordering(566) 00:12:17.212 fused_ordering(567) 00:12:17.212 fused_ordering(568) 00:12:17.212 fused_ordering(569) 00:12:17.212 fused_ordering(570) 00:12:17.212 fused_ordering(571) 00:12:17.212 fused_ordering(572) 00:12:17.212 fused_ordering(573) 00:12:17.212 fused_ordering(574) 00:12:17.212 fused_ordering(575) 00:12:17.212 fused_ordering(576) 00:12:17.212 fused_ordering(577) 00:12:17.212 fused_ordering(578) 00:12:17.212 fused_ordering(579) 00:12:17.212 fused_ordering(580) 00:12:17.212 fused_ordering(581) 00:12:17.212 fused_ordering(582) 00:12:17.212 fused_ordering(583) 00:12:17.212 fused_ordering(584) 00:12:17.212 fused_ordering(585) 00:12:17.212 fused_ordering(586) 00:12:17.212 fused_ordering(587) 00:12:17.212 fused_ordering(588) 00:12:17.212 fused_ordering(589) 00:12:17.212 fused_ordering(590) 00:12:17.212 fused_ordering(591) 00:12:17.212 fused_ordering(592) 00:12:17.212 fused_ordering(593) 00:12:17.212 fused_ordering(594) 00:12:17.212 fused_ordering(595) 00:12:17.212 fused_ordering(596) 00:12:17.212 fused_ordering(597) 00:12:17.212 fused_ordering(598) 00:12:17.212 fused_ordering(599) 00:12:17.212 fused_ordering(600) 00:12:17.212 fused_ordering(601) 00:12:17.212 fused_ordering(602) 00:12:17.212 fused_ordering(603) 00:12:17.212 fused_ordering(604) 00:12:17.212 fused_ordering(605) 00:12:17.212 fused_ordering(606) 00:12:17.212 fused_ordering(607) 00:12:17.212 fused_ordering(608) 00:12:17.212 fused_ordering(609) 00:12:17.212 fused_ordering(610) 00:12:17.212 fused_ordering(611) 00:12:17.212 fused_ordering(612) 00:12:17.212 fused_ordering(613) 00:12:17.212 fused_ordering(614) 00:12:17.212 fused_ordering(615) 00:12:17.779 fused_ordering(616) 00:12:17.779 fused_ordering(617) 00:12:17.779 fused_ordering(618) 00:12:17.779 fused_ordering(619) 00:12:17.779 fused_ordering(620) 00:12:17.779 fused_ordering(621) 00:12:17.779 fused_ordering(622) 00:12:17.779 fused_ordering(623) 00:12:17.779 fused_ordering(624) 00:12:17.779 fused_ordering(625) 00:12:17.779 fused_ordering(626) 00:12:17.779 fused_ordering(627) 00:12:17.779 fused_ordering(628) 00:12:17.779 fused_ordering(629) 00:12:17.779 fused_ordering(630) 00:12:17.779 fused_ordering(631) 00:12:17.779 fused_ordering(632) 00:12:17.779 fused_ordering(633) 00:12:17.779 fused_ordering(634) 00:12:17.779 fused_ordering(635) 00:12:17.779 
fused_ordering(636) 00:12:17.779 fused_ordering(637) 00:12:17.779 fused_ordering(638) 00:12:17.779 fused_ordering(639) 00:12:17.779 fused_ordering(640) 00:12:17.779 fused_ordering(641) 00:12:17.779 fused_ordering(642) 00:12:17.780 fused_ordering(643) 00:12:17.780 fused_ordering(644) 00:12:17.780 fused_ordering(645) 00:12:17.780 fused_ordering(646) 00:12:17.780 fused_ordering(647) 00:12:17.780 fused_ordering(648) 00:12:17.780 fused_ordering(649) 00:12:17.780 fused_ordering(650) 00:12:17.780 fused_ordering(651) 00:12:17.780 fused_ordering(652) 00:12:17.780 fused_ordering(653) 00:12:17.780 fused_ordering(654) 00:12:17.780 fused_ordering(655) 00:12:17.780 fused_ordering(656) 00:12:17.780 fused_ordering(657) 00:12:17.780 fused_ordering(658) 00:12:17.780 fused_ordering(659) 00:12:17.780 fused_ordering(660) 00:12:17.780 fused_ordering(661) 00:12:17.780 fused_ordering(662) 00:12:17.780 fused_ordering(663) 00:12:17.780 fused_ordering(664) 00:12:17.780 fused_ordering(665) 00:12:17.780 fused_ordering(666) 00:12:17.780 fused_ordering(667) 00:12:17.780 fused_ordering(668) 00:12:17.780 fused_ordering(669) 00:12:17.780 fused_ordering(670) 00:12:17.780 fused_ordering(671) 00:12:17.780 fused_ordering(672) 00:12:17.780 fused_ordering(673) 00:12:17.780 fused_ordering(674) 00:12:17.780 fused_ordering(675) 00:12:17.780 fused_ordering(676) 00:12:17.780 fused_ordering(677) 00:12:17.780 fused_ordering(678) 00:12:17.780 fused_ordering(679) 00:12:17.780 fused_ordering(680) 00:12:17.780 fused_ordering(681) 00:12:17.780 fused_ordering(682) 00:12:17.780 fused_ordering(683) 00:12:17.780 fused_ordering(684) 00:12:17.780 fused_ordering(685) 00:12:17.780 fused_ordering(686) 00:12:17.780 fused_ordering(687) 00:12:17.780 fused_ordering(688) 00:12:17.780 fused_ordering(689) 00:12:17.780 fused_ordering(690) 00:12:17.780 fused_ordering(691) 00:12:17.780 fused_ordering(692) 00:12:17.780 fused_ordering(693) 00:12:17.780 fused_ordering(694) 00:12:17.780 fused_ordering(695) 00:12:17.780 fused_ordering(696) 00:12:17.780 fused_ordering(697) 00:12:17.780 fused_ordering(698) 00:12:17.780 fused_ordering(699) 00:12:17.780 fused_ordering(700) 00:12:17.780 fused_ordering(701) 00:12:17.780 fused_ordering(702) 00:12:17.780 fused_ordering(703) 00:12:17.780 fused_ordering(704) 00:12:17.780 fused_ordering(705) 00:12:17.780 fused_ordering(706) 00:12:17.780 fused_ordering(707) 00:12:17.780 fused_ordering(708) 00:12:17.780 fused_ordering(709) 00:12:17.780 fused_ordering(710) 00:12:17.780 fused_ordering(711) 00:12:17.780 fused_ordering(712) 00:12:17.780 fused_ordering(713) 00:12:17.780 fused_ordering(714) 00:12:17.780 fused_ordering(715) 00:12:17.780 fused_ordering(716) 00:12:17.780 fused_ordering(717) 00:12:17.780 fused_ordering(718) 00:12:17.780 fused_ordering(719) 00:12:17.780 fused_ordering(720) 00:12:17.780 fused_ordering(721) 00:12:17.780 fused_ordering(722) 00:12:17.780 fused_ordering(723) 00:12:17.780 fused_ordering(724) 00:12:17.780 fused_ordering(725) 00:12:17.780 fused_ordering(726) 00:12:17.780 fused_ordering(727) 00:12:17.780 fused_ordering(728) 00:12:17.780 fused_ordering(729) 00:12:17.780 fused_ordering(730) 00:12:17.780 fused_ordering(731) 00:12:17.780 fused_ordering(732) 00:12:17.780 fused_ordering(733) 00:12:17.780 fused_ordering(734) 00:12:17.780 fused_ordering(735) 00:12:17.780 fused_ordering(736) 00:12:17.780 fused_ordering(737) 00:12:17.780 fused_ordering(738) 00:12:17.780 fused_ordering(739) 00:12:17.780 fused_ordering(740) 00:12:17.780 fused_ordering(741) 00:12:17.780 fused_ordering(742) 00:12:17.780 fused_ordering(743) 
00:12:17.780 fused_ordering(744) 00:12:17.780 fused_ordering(745) 00:12:17.780 fused_ordering(746) 00:12:17.780 fused_ordering(747) 00:12:17.780 fused_ordering(748) 00:12:17.780 fused_ordering(749) 00:12:17.780 fused_ordering(750) 00:12:17.780 fused_ordering(751) 00:12:17.780 fused_ordering(752) 00:12:17.780 fused_ordering(753) 00:12:17.780 fused_ordering(754) 00:12:17.780 fused_ordering(755) 00:12:17.780 fused_ordering(756) 00:12:17.780 fused_ordering(757) 00:12:17.780 fused_ordering(758) 00:12:17.780 fused_ordering(759) 00:12:17.780 fused_ordering(760) 00:12:17.780 fused_ordering(761) 00:12:17.780 fused_ordering(762) 00:12:17.780 fused_ordering(763) 00:12:17.780 fused_ordering(764) 00:12:17.780 fused_ordering(765) 00:12:17.780 fused_ordering(766) 00:12:17.780 fused_ordering(767) 00:12:17.780 fused_ordering(768) 00:12:17.780 fused_ordering(769) 00:12:17.780 fused_ordering(770) 00:12:17.780 fused_ordering(771) 00:12:17.780 fused_ordering(772) 00:12:17.780 fused_ordering(773) 00:12:17.780 fused_ordering(774) 00:12:17.780 fused_ordering(775) 00:12:17.780 fused_ordering(776) 00:12:17.780 fused_ordering(777) 00:12:17.780 fused_ordering(778) 00:12:17.780 fused_ordering(779) 00:12:17.780 fused_ordering(780) 00:12:17.780 fused_ordering(781) 00:12:17.780 fused_ordering(782) 00:12:17.780 fused_ordering(783) 00:12:17.780 fused_ordering(784) 00:12:17.780 fused_ordering(785) 00:12:17.780 fused_ordering(786) 00:12:17.780 fused_ordering(787) 00:12:17.780 fused_ordering(788) 00:12:17.780 fused_ordering(789) 00:12:17.780 fused_ordering(790) 00:12:17.780 fused_ordering(791) 00:12:17.780 fused_ordering(792) 00:12:17.780 fused_ordering(793) 00:12:17.780 fused_ordering(794) 00:12:17.780 fused_ordering(795) 00:12:17.780 fused_ordering(796) 00:12:17.780 fused_ordering(797) 00:12:17.780 fused_ordering(798) 00:12:17.780 fused_ordering(799) 00:12:17.780 fused_ordering(800) 00:12:17.780 fused_ordering(801) 00:12:17.780 fused_ordering(802) 00:12:17.780 fused_ordering(803) 00:12:17.780 fused_ordering(804) 00:12:17.780 fused_ordering(805) 00:12:17.780 fused_ordering(806) 00:12:17.780 fused_ordering(807) 00:12:17.780 fused_ordering(808) 00:12:17.780 fused_ordering(809) 00:12:17.780 fused_ordering(810) 00:12:17.780 fused_ordering(811) 00:12:17.780 fused_ordering(812) 00:12:17.780 fused_ordering(813) 00:12:17.780 fused_ordering(814) 00:12:17.780 fused_ordering(815) 00:12:17.780 fused_ordering(816) 00:12:17.780 fused_ordering(817) 00:12:17.780 fused_ordering(818) 00:12:17.780 fused_ordering(819) 00:12:17.780 fused_ordering(820) 00:12:18.348 fused_ordering(821) 00:12:18.348 fused_ordering(822) 00:12:18.348 fused_ordering(823) 00:12:18.348 fused_ordering(824) 00:12:18.348 fused_ordering(825) 00:12:18.348 fused_ordering(826) 00:12:18.348 fused_ordering(827) 00:12:18.348 fused_ordering(828) 00:12:18.348 fused_ordering(829) 00:12:18.348 fused_ordering(830) 00:12:18.348 fused_ordering(831) 00:12:18.348 fused_ordering(832) 00:12:18.348 fused_ordering(833) 00:12:18.348 fused_ordering(834) 00:12:18.348 fused_ordering(835) 00:12:18.348 fused_ordering(836) 00:12:18.348 fused_ordering(837) 00:12:18.348 fused_ordering(838) 00:12:18.348 fused_ordering(839) 00:12:18.348 fused_ordering(840) 00:12:18.348 fused_ordering(841) 00:12:18.348 fused_ordering(842) 00:12:18.348 fused_ordering(843) 00:12:18.348 fused_ordering(844) 00:12:18.348 fused_ordering(845) 00:12:18.348 fused_ordering(846) 00:12:18.348 fused_ordering(847) 00:12:18.348 fused_ordering(848) 00:12:18.348 fused_ordering(849) 00:12:18.348 fused_ordering(850) 00:12:18.348 
fused_ordering(851) 00:12:18.348 fused_ordering(852) 00:12:18.348 fused_ordering(853) 00:12:18.348 fused_ordering(854) 00:12:18.348 fused_ordering(855) 00:12:18.348 fused_ordering(856) 00:12:18.348 fused_ordering(857) 00:12:18.348 fused_ordering(858) 00:12:18.348 fused_ordering(859) 00:12:18.348 fused_ordering(860) 00:12:18.348 fused_ordering(861) 00:12:18.348 fused_ordering(862) 00:12:18.348 fused_ordering(863) 00:12:18.348 fused_ordering(864) 00:12:18.348 fused_ordering(865) 00:12:18.348 fused_ordering(866) 00:12:18.348 fused_ordering(867) 00:12:18.348 fused_ordering(868) 00:12:18.348 fused_ordering(869) 00:12:18.348 fused_ordering(870) 00:12:18.348 fused_ordering(871) 00:12:18.348 fused_ordering(872) 00:12:18.348 fused_ordering(873) 00:12:18.348 fused_ordering(874) 00:12:18.348 fused_ordering(875) 00:12:18.348 fused_ordering(876) 00:12:18.348 fused_ordering(877) 00:12:18.348 fused_ordering(878) 00:12:18.348 fused_ordering(879) 00:12:18.348 fused_ordering(880) 00:12:18.348 fused_ordering(881) 00:12:18.348 fused_ordering(882) 00:12:18.348 fused_ordering(883) 00:12:18.348 fused_ordering(884) 00:12:18.348 fused_ordering(885) 00:12:18.348 fused_ordering(886) 00:12:18.348 fused_ordering(887) 00:12:18.348 fused_ordering(888) 00:12:18.348 fused_ordering(889) 00:12:18.348 fused_ordering(890) 00:12:18.348 fused_ordering(891) 00:12:18.348 fused_ordering(892) 00:12:18.348 fused_ordering(893) 00:12:18.348 fused_ordering(894) 00:12:18.348 fused_ordering(895) 00:12:18.348 fused_ordering(896) 00:12:18.348 fused_ordering(897) 00:12:18.348 fused_ordering(898) 00:12:18.348 fused_ordering(899) 00:12:18.348 fused_ordering(900) 00:12:18.348 fused_ordering(901) 00:12:18.348 fused_ordering(902) 00:12:18.348 fused_ordering(903) 00:12:18.348 fused_ordering(904) 00:12:18.348 fused_ordering(905) 00:12:18.348 fused_ordering(906) 00:12:18.348 fused_ordering(907) 00:12:18.348 fused_ordering(908) 00:12:18.348 fused_ordering(909) 00:12:18.348 fused_ordering(910) 00:12:18.348 fused_ordering(911) 00:12:18.348 fused_ordering(912) 00:12:18.348 fused_ordering(913) 00:12:18.348 fused_ordering(914) 00:12:18.348 fused_ordering(915) 00:12:18.348 fused_ordering(916) 00:12:18.348 fused_ordering(917) 00:12:18.348 fused_ordering(918) 00:12:18.348 fused_ordering(919) 00:12:18.348 fused_ordering(920) 00:12:18.348 fused_ordering(921) 00:12:18.348 fused_ordering(922) 00:12:18.348 fused_ordering(923) 00:12:18.348 fused_ordering(924) 00:12:18.348 fused_ordering(925) 00:12:18.348 fused_ordering(926) 00:12:18.348 fused_ordering(927) 00:12:18.348 fused_ordering(928) 00:12:18.348 fused_ordering(929) 00:12:18.348 fused_ordering(930) 00:12:18.348 fused_ordering(931) 00:12:18.348 fused_ordering(932) 00:12:18.348 fused_ordering(933) 00:12:18.348 fused_ordering(934) 00:12:18.348 fused_ordering(935) 00:12:18.348 fused_ordering(936) 00:12:18.348 fused_ordering(937) 00:12:18.348 fused_ordering(938) 00:12:18.348 fused_ordering(939) 00:12:18.348 fused_ordering(940) 00:12:18.348 fused_ordering(941) 00:12:18.348 fused_ordering(942) 00:12:18.348 fused_ordering(943) 00:12:18.348 fused_ordering(944) 00:12:18.348 fused_ordering(945) 00:12:18.348 fused_ordering(946) 00:12:18.348 fused_ordering(947) 00:12:18.348 fused_ordering(948) 00:12:18.348 fused_ordering(949) 00:12:18.348 fused_ordering(950) 00:12:18.348 fused_ordering(951) 00:12:18.348 fused_ordering(952) 00:12:18.348 fused_ordering(953) 00:12:18.348 fused_ordering(954) 00:12:18.348 fused_ordering(955) 00:12:18.348 fused_ordering(956) 00:12:18.348 fused_ordering(957) 00:12:18.348 fused_ordering(958) 
00:12:18.348 fused_ordering(959) 00:12:18.348 fused_ordering(960) 00:12:18.348 fused_ordering(961) 00:12:18.348 fused_ordering(962) 00:12:18.348 fused_ordering(963) 00:12:18.348 fused_ordering(964) 00:12:18.348 fused_ordering(965) 00:12:18.348 fused_ordering(966) 00:12:18.348 fused_ordering(967) 00:12:18.348 fused_ordering(968) 00:12:18.348 fused_ordering(969) 00:12:18.348 fused_ordering(970) 00:12:18.348 fused_ordering(971) 00:12:18.348 fused_ordering(972) 00:12:18.348 fused_ordering(973) 00:12:18.348 fused_ordering(974) 00:12:18.348 fused_ordering(975) 00:12:18.348 fused_ordering(976) 00:12:18.348 fused_ordering(977) 00:12:18.348 fused_ordering(978) 00:12:18.348 fused_ordering(979) 00:12:18.348 fused_ordering(980) 00:12:18.348 fused_ordering(981) 00:12:18.348 fused_ordering(982) 00:12:18.348 fused_ordering(983) 00:12:18.348 fused_ordering(984) 00:12:18.348 fused_ordering(985) 00:12:18.348 fused_ordering(986) 00:12:18.348 fused_ordering(987) 00:12:18.348 fused_ordering(988) 00:12:18.348 fused_ordering(989) 00:12:18.348 fused_ordering(990) 00:12:18.348 fused_ordering(991) 00:12:18.348 fused_ordering(992) 00:12:18.348 fused_ordering(993) 00:12:18.348 fused_ordering(994) 00:12:18.348 fused_ordering(995) 00:12:18.348 fused_ordering(996) 00:12:18.348 fused_ordering(997) 00:12:18.348 fused_ordering(998) 00:12:18.348 fused_ordering(999) 00:12:18.348 fused_ordering(1000) 00:12:18.348 fused_ordering(1001) 00:12:18.348 fused_ordering(1002) 00:12:18.348 fused_ordering(1003) 00:12:18.348 fused_ordering(1004) 00:12:18.348 fused_ordering(1005) 00:12:18.348 fused_ordering(1006) 00:12:18.348 fused_ordering(1007) 00:12:18.348 fused_ordering(1008) 00:12:18.348 fused_ordering(1009) 00:12:18.348 fused_ordering(1010) 00:12:18.348 fused_ordering(1011) 00:12:18.348 fused_ordering(1012) 00:12:18.348 fused_ordering(1013) 00:12:18.348 fused_ordering(1014) 00:12:18.348 fused_ordering(1015) 00:12:18.348 fused_ordering(1016) 00:12:18.348 fused_ordering(1017) 00:12:18.348 fused_ordering(1018) 00:12:18.348 fused_ordering(1019) 00:12:18.348 fused_ordering(1020) 00:12:18.348 fused_ordering(1021) 00:12:18.348 fused_ordering(1022) 00:12:18.348 fused_ordering(1023) 00:12:18.348 19:12:55 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:18.348 19:12:55 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:18.348 19:12:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:18.348 19:12:55 -- nvmf/common.sh@116 -- # sync 00:12:18.348 19:12:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:18.348 19:12:55 -- nvmf/common.sh@119 -- # set +e 00:12:18.348 19:12:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:18.348 19:12:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:18.348 rmmod nvme_tcp 00:12:18.348 rmmod nvme_fabrics 00:12:18.348 rmmod nvme_keyring 00:12:18.348 19:12:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:18.348 19:12:55 -- nvmf/common.sh@123 -- # set -e 00:12:18.348 19:12:55 -- nvmf/common.sh@124 -- # return 0 00:12:18.348 19:12:55 -- nvmf/common.sh@477 -- # '[' -n 69004 ']' 00:12:18.348 19:12:55 -- nvmf/common.sh@478 -- # killprocess 69004 00:12:18.348 19:12:55 -- common/autotest_common.sh@924 -- # '[' -z 69004 ']' 00:12:18.348 19:12:55 -- common/autotest_common.sh@928 -- # kill -0 69004 00:12:18.348 19:12:55 -- common/autotest_common.sh@929 -- # uname 00:12:18.348 19:12:55 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:18.348 19:12:55 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 69004 00:12:18.348 killing process with pid 69004 
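The long fused_ordering(0) through fused_ordering(1023) block above is the progress output of the fused_ordering example binary itself, a running counter it prints as it exercises fused command ordering against the 1 GB NULL1 namespace; the numbering is the app's own and its internals are not reproduced here. Before it ran, the target inside the namespace was configured through rpc_cmd, assumed here to resolve to scripts/rpc.py against the target's default /var/tmp/spdk.sock socket; the RPC names and arguments below are verbatim from the trace:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # -a: allow any host, -s: serial number, -m: max namespaces
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev (reported above as "size: 1GB"), 512-byte blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # initiator-side run that produced the fused_ordering(N) counters
  /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'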
00:12:18.348 19:12:55 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:12:18.348 19:12:55 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:12:18.348 19:12:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 69004' 00:12:18.348 19:12:55 -- common/autotest_common.sh@943 -- # kill 69004 00:12:18.348 19:12:55 -- common/autotest_common.sh@948 -- # wait 69004 00:12:18.608 19:12:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:18.608 19:12:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:18.608 19:12:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:18.608 19:12:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.608 19:12:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:18.608 19:12:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.608 19:12:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.608 19:12:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.608 19:12:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:18.867 00:12:18.867 real 0m4.357s 00:12:18.867 user 0m4.944s 00:12:18.867 sys 0m1.585s 00:12:18.867 19:12:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:18.867 19:12:56 -- common/autotest_common.sh@10 -- # set +x 00:12:18.867 ************************************ 00:12:18.867 END TEST nvmf_fused_ordering 00:12:18.867 ************************************ 00:12:18.867 19:12:56 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:18.867 19:12:56 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:18.867 19:12:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:18.867 19:12:56 -- common/autotest_common.sh@10 -- # set +x 00:12:18.867 ************************************ 00:12:18.867 START TEST nvmf_delete_subsystem 00:12:18.867 ************************************ 00:12:18.867 19:12:56 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:18.867 * Looking for test storage... 
00:12:18.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.867 19:12:56 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.867 19:12:56 -- nvmf/common.sh@7 -- # uname -s 00:12:18.867 19:12:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.867 19:12:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.867 19:12:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.867 19:12:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.867 19:12:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.867 19:12:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.867 19:12:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.867 19:12:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.867 19:12:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.867 19:12:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.867 19:12:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:12:18.867 19:12:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:12:18.867 19:12:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.867 19:12:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.867 19:12:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.867 19:12:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.867 19:12:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.867 19:12:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.867 19:12:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.867 19:12:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.867 19:12:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.867 19:12:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.867 19:12:56 -- 
paths/export.sh@5 -- # export PATH 00:12:18.867 19:12:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.867 19:12:56 -- nvmf/common.sh@46 -- # : 0 00:12:18.867 19:12:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:18.867 19:12:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:18.867 19:12:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:18.867 19:12:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.867 19:12:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.867 19:12:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:18.867 19:12:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:18.867 19:12:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:18.867 19:12:56 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:18.867 19:12:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:18.867 19:12:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.867 19:12:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:18.867 19:12:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:18.867 19:12:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:18.867 19:12:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.867 19:12:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.867 19:12:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.867 19:12:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:18.867 19:12:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:18.867 19:12:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:18.867 19:12:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:18.867 19:12:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:18.867 19:12:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:18.867 19:12:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.867 19:12:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.867 19:12:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:18.867 19:12:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:18.867 19:12:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.867 19:12:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.867 19:12:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.867 19:12:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.867 19:12:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.867 19:12:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.867 19:12:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.868 19:12:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.868 19:12:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:18.868 19:12:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:18.868 Cannot find device "nvmf_tgt_br" 00:12:18.868 
19:12:56 -- nvmf/common.sh@154 -- # true 00:12:18.868 19:12:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.868 Cannot find device "nvmf_tgt_br2" 00:12:18.868 19:12:56 -- nvmf/common.sh@155 -- # true 00:12:18.868 19:12:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:18.868 19:12:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:18.868 Cannot find device "nvmf_tgt_br" 00:12:18.868 19:12:56 -- nvmf/common.sh@157 -- # true 00:12:18.868 19:12:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:18.868 Cannot find device "nvmf_tgt_br2" 00:12:18.868 19:12:56 -- nvmf/common.sh@158 -- # true 00:12:18.868 19:12:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:19.126 19:12:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:19.126 19:12:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:19.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.126 19:12:56 -- nvmf/common.sh@161 -- # true 00:12:19.126 19:12:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:19.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.126 19:12:56 -- nvmf/common.sh@162 -- # true 00:12:19.126 19:12:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:19.126 19:12:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:19.126 19:12:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:19.126 19:12:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:19.126 19:12:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:19.126 19:12:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:19.126 19:12:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:19.126 19:12:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:19.126 19:12:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:19.126 19:12:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:19.126 19:12:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:19.126 19:12:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:19.126 19:12:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:19.126 19:12:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:19.126 19:12:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:19.126 19:12:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:19.126 19:12:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:19.127 19:12:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:19.127 19:12:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:19.127 19:12:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:19.127 19:12:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:19.127 19:12:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:19.127 19:12:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:19.127 19:12:56 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:12:19.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:12:19.127 00:12:19.127 --- 10.0.0.2 ping statistics --- 00:12:19.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.127 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:12:19.127 19:12:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:19.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:19.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:19.127 00:12:19.127 --- 10.0.0.3 ping statistics --- 00:12:19.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.127 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:19.127 19:12:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:19.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:12:19.127 00:12:19.127 --- 10.0.0.1 ping statistics --- 00:12:19.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.127 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:19.127 19:12:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.127 19:12:56 -- nvmf/common.sh@421 -- # return 0 00:12:19.127 19:12:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:19.127 19:12:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.127 19:12:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:19.127 19:12:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:19.127 19:12:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.127 19:12:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:19.127 19:12:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:19.127 19:12:56 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:19.127 19:12:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:19.127 19:12:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:19.127 19:12:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.386 19:12:56 -- nvmf/common.sh@469 -- # nvmfpid=69263 00:12:19.386 19:12:56 -- nvmf/common.sh@470 -- # waitforlisten 69263 00:12:19.386 19:12:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:19.386 19:12:56 -- common/autotest_common.sh@817 -- # '[' -z 69263 ']' 00:12:19.386 19:12:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.386 19:12:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:19.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.386 19:12:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.386 19:12:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:19.386 19:12:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.386 [2024-02-14 19:12:56.595700] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
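The trace above is nvmf/common.sh rebuilding the virtual test network before the target comes up: leftover interfaces from a previous run are removed (the "Cannot find device" and "Cannot open network namespace" messages are expected on a clean host), the namespace nvmf_tgt_ns_spdk is created, two veth pairs connect it to the host through the nvmf_br bridge, 10.0.0.1/24 stays on the host side while 10.0.0.2/24 and 10.0.0.3/24 live inside the namespace, TCP port 4420 is opened in iptables, and the three ping checks confirm the path works before nvmf_tgt is started inside the namespace. A condensed sketch of the same topology, using the names and addresses from the log (only the first veth pair is shown; nvmf_tgt_if2/nvmf_tgt_br2 follow the same pattern):

  # condensed sketch of the nvmf/common.sh network setup traced above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # host -> namespace reachability check

With the plumbing verified, the target is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3), so it listens on 10.0.0.2 while the initiator side of the test runs from the host.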
00:12:19.386 [2024-02-14 19:12:56.595817] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.386 [2024-02-14 19:12:56.730074] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:19.645 [2024-02-14 19:12:56.857078] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:19.645 [2024-02-14 19:12:56.857502] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.645 [2024-02-14 19:12:56.857671] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.645 [2024-02-14 19:12:56.857817] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.645 [2024-02-14 19:12:56.858160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.645 [2024-02-14 19:12:56.858159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.213 19:12:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:20.213 19:12:57 -- common/autotest_common.sh@850 -- # return 0 00:12:20.213 19:12:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:20.213 19:12:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:20.213 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.213 19:12:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.213 19:12:57 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.213 19:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.213 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.213 [2024-02-14 19:12:57.621891] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.213 19:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.213 19:12:57 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:20.213 19:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.213 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.472 19:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.472 19:12:57 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.472 19:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.472 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.472 [2024-02-14 19:12:57.638831] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.472 19:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.472 19:12:57 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:20.472 19:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.472 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.472 NULL1 00:12:20.472 19:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.472 19:12:57 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:20.472 19:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.472 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.472 
Delay0 00:12:20.472 19:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.472 19:12:57 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.472 19:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.472 19:12:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.472 19:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.472 19:12:57 -- target/delete_subsystem.sh@28 -- # perf_pid=69314 00:12:20.472 19:12:57 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:20.472 19:12:57 -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:20.472 [2024-02-14 19:12:57.833153] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:22.376 19:12:59 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.376 19:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.376 19:12:59 -- common/autotest_common.sh@10 -- # set +x 00:12:22.635 Write completed with error (sct=0, sc=8) 00:12:22.635 Read completed with error (sct=0, sc=8) 00:12:22.635 Read completed with error (sct=0, sc=8) 00:12:22.635 starting I/O failed: -6 00:12:22.635 Read completed with error (sct=0, sc=8) 00:12:22.635 Read completed with error (sct=0, sc=8) 00:12:22.635 Read completed with error (sct=0, sc=8) 00:12:22.635 Write completed with error (sct=0, sc=8) 00:12:22.635 starting I/O failed: -6 00:12:22.635 Read completed with error (sct=0, sc=8) 00:12:22.635 Write completed with error (sct=0, sc=8) 00:12:22.635 Read completed with error (sct=0, sc=8) 00:12:22.635 Write completed with error (sct=0, sc=8) 00:12:22.635 starting I/O failed: -6 00:12:22.635 Read completed with error (sct=0, sc=8) 00:12:22.635 Write completed with error (sct=0, sc=8) 00:12:22.635 Read completed with error (sct=0, sc=8) 00:12:22.635 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 
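With the target up, delete_subsystem.sh builds its data path entirely through RPCs: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (the four 1000000 delay arguments, in microseconds, keep I/O pinned in the bdev long enough to matter) exported as the namespace. spdk_nvme_perf then drives queue-depth-128 random I/O at the subsystem and, two seconds in, the script deletes the subsystem underneath the running workload; the flood of "completed with error (sct=0, sc=8)" entries around this point is every queued command coming back aborted instead of successful, which is exactly the behaviour the test exercises. A sketch of that sequence written as standalone rpc.py calls (the script issues the same commands through its rpc_cmd wrapper):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                 # sizes exactly as in the trace above
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # aborts everything still queued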
00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 [2024-02-14 19:12:59.885311] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4b2c000c00 is same with the state(5) to be set 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 [2024-02-14 19:12:59.886430] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4b2c00c230 is same with the state(5) to be set 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting 
I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 starting I/O failed: -6 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 [2024-02-14 19:12:59.887504] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x582dc0 is same with the state(5) to be set 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read 
completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Read completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.636 Write completed with error (sct=0, sc=8) 00:12:22.637 Read completed with error (sct=0, sc=8) 00:12:22.637 Read completed with error (sct=0, sc=8) 00:12:22.637 Read completed with error (sct=0, sc=8) 00:12:22.637 Write completed with error (sct=0, sc=8) 00:12:23.570 [2024-02-14 19:13:00.849342] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x582450 is same with the state(5) to be set 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error 
(sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 [2024-02-14 19:13:00.885123] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x583160 is same with the state(5) to be set 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 [2024-02-14 19:13:00.885415] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x581ee0 is same with the state(5) to be set 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 
00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 [2024-02-14 19:13:00.887372] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4b2c00bf80 is same with the state(5) to be set 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Write completed with error (sct=0, sc=8) 00:12:23.570 Read completed with error (sct=0, sc=8) 00:12:23.570 [2024-02-14 19:13:00.888379] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4b2c00c4e0 is same with the state(5) to be set 00:12:23.570 19:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.570 19:13:00 -- target/delete_subsystem.sh@34 -- # delay=0 00:12:23.570 19:13:00 -- target/delete_subsystem.sh@35 -- # kill -0 69314 00:12:23.570 19:13:00 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:23.570 [2024-02-14 19:13:00.890771] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x582450 (9): Bad file descriptor 00:12:23.570 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:23.570 Initializing NVMe Controllers 00:12:23.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:23.570 Controller IO queue size 128, less than required. 00:12:23.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:23.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:23.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:23.570 Initialization complete. Launching workers. 
00:12:23.570 ======================================================== 00:12:23.570 Latency(us) 00:12:23.570 Device Information : IOPS MiB/s Average min max 00:12:23.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.53 0.08 938173.26 437.03 2002251.58 00:12:23.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.64 0.08 949944.02 1035.67 2004543.58 00:12:23.571 ======================================================== 00:12:23.571 Total : 327.17 0.16 943880.83 437.03 2004543.58 00:12:23.571 00:12:24.137 19:13:01 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:24.137 19:13:01 -- target/delete_subsystem.sh@35 -- # kill -0 69314 00:12:24.137 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (69314) - No such process 00:12:24.137 19:13:01 -- target/delete_subsystem.sh@45 -- # NOT wait 69314 00:12:24.137 19:13:01 -- common/autotest_common.sh@638 -- # local es=0 00:12:24.137 19:13:01 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 69314 00:12:24.137 19:13:01 -- common/autotest_common.sh@626 -- # local arg=wait 00:12:24.137 19:13:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:24.137 19:13:01 -- common/autotest_common.sh@630 -- # type -t wait 00:12:24.137 19:13:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:24.137 19:13:01 -- common/autotest_common.sh@641 -- # wait 69314 00:12:24.137 19:13:01 -- common/autotest_common.sh@641 -- # es=1 00:12:24.137 19:13:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:24.137 19:13:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:24.137 19:13:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:24.137 19:13:01 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:24.137 19:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.137 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.137 19:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.137 19:13:01 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.137 19:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.137 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.137 [2024-02-14 19:13:01.412837] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.137 19:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.138 19:13:01 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.138 19:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.138 19:13:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.138 19:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.138 19:13:01 -- target/delete_subsystem.sh@54 -- # perf_pid=69361 00:12:24.138 19:13:01 -- target/delete_subsystem.sh@56 -- # delay=0 00:12:24.138 19:13:01 -- target/delete_subsystem.sh@57 -- # kill -0 69361 00:12:24.138 19:13:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:24.138 19:13:01 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:24.395 [2024-02-14 19:13:01.590231] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:24.652 19:13:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:24.652 19:13:01 -- target/delete_subsystem.sh@57 -- # kill -0 69361 00:12:24.652 19:13:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.219 19:13:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.219 19:13:02 -- target/delete_subsystem.sh@57 -- # kill -0 69361 00:12:25.219 19:13:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.786 19:13:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.786 19:13:02 -- target/delete_subsystem.sh@57 -- # kill -0 69361 00:12:25.786 19:13:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:26.048 19:13:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:26.048 19:13:03 -- target/delete_subsystem.sh@57 -- # kill -0 69361 00:12:26.048 19:13:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:26.616 19:13:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:26.616 19:13:03 -- target/delete_subsystem.sh@57 -- # kill -0 69361 00:12:26.616 19:13:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:27.183 19:13:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:27.183 19:13:04 -- target/delete_subsystem.sh@57 -- # kill -0 69361 00:12:27.183 19:13:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:27.442 Initializing NVMe Controllers 00:12:27.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:27.442 Controller IO queue size 128, less than required. 00:12:27.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:27.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:27.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:27.442 Initialization complete. Launching workers. 
00:12:27.442 ======================================================== 00:12:27.442 Latency(us) 00:12:27.442 Device Information : IOPS MiB/s Average min max 00:12:27.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004527.63 1000153.96 1042496.43 00:12:27.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006791.23 1000151.36 1020648.48 00:12:27.442 ======================================================== 00:12:27.442 Total : 256.00 0.12 1005659.43 1000151.36 1042496.43 00:12:27.442 00:12:27.701 19:13:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:27.701 19:13:04 -- target/delete_subsystem.sh@57 -- # kill -0 69361 00:12:27.701 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (69361) - No such process 00:12:27.701 19:13:04 -- target/delete_subsystem.sh@67 -- # wait 69361 00:12:27.701 19:13:04 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:27.701 19:13:04 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:27.701 19:13:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:27.701 19:13:04 -- nvmf/common.sh@116 -- # sync 00:12:27.701 19:13:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:27.701 19:13:05 -- nvmf/common.sh@119 -- # set +e 00:12:27.701 19:13:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:27.701 19:13:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:27.701 rmmod nvme_tcp 00:12:27.701 rmmod nvme_fabrics 00:12:27.701 rmmod nvme_keyring 00:12:27.701 19:13:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:27.701 19:13:05 -- nvmf/common.sh@123 -- # set -e 00:12:27.701 19:13:05 -- nvmf/common.sh@124 -- # return 0 00:12:27.701 19:13:05 -- nvmf/common.sh@477 -- # '[' -n 69263 ']' 00:12:27.701 19:13:05 -- nvmf/common.sh@478 -- # killprocess 69263 00:12:27.701 19:13:05 -- common/autotest_common.sh@924 -- # '[' -z 69263 ']' 00:12:27.701 19:13:05 -- common/autotest_common.sh@928 -- # kill -0 69263 00:12:27.701 19:13:05 -- common/autotest_common.sh@929 -- # uname 00:12:27.701 19:13:05 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:27.701 19:13:05 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 69263 00:12:27.701 19:13:05 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:27.701 19:13:05 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:27.701 killing process with pid 69263 00:12:27.701 19:13:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 69263' 00:12:27.701 19:13:05 -- common/autotest_common.sh@943 -- # kill 69263 00:12:27.701 19:13:05 -- common/autotest_common.sh@948 -- # wait 69263 00:12:27.959 19:13:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:27.959 19:13:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:27.959 19:13:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:27.959 19:13:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.959 19:13:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:27.959 19:13:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.959 19:13:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.959 19:13:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.218 19:13:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:28.218 ************************************ 00:12:28.218 END TEST nvmf_delete_subsystem 00:12:28.218 ************************************ 
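After the aborted run, the script flips the experiment: it recreates the subsystem and listener, starts a second, 3-second spdk_nvme_perf run, and simply polls it with kill -0 until it exits on its own; the second latency summary above shows minimums just over 1,000,000 microseconds, which is the one-second delay bdev sitting in the path. nvmftestfini then tears the whole environment down. A rough sketch of that teardown, condensed from the commands visible in the trace (the body of remove_spdk_ns is not shown in the log, so the namespace cleanup line is an assumption):

  # teardown path of nvmftestfini / nvmf_tcp_fini, condensed from the trace
  sync
  modprobe -v -r nvme-tcp            # the log shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill 69263 && wait 69263           # stop the nvmf_tgt started at the beginning of the test
  ip -4 addr flush nvmf_init_if      # drop the 10.0.0.1/24 test address
  ip netns delete nvmf_tgt_ns_spdk   # assumption: what remove_spdk_ns does under the hood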
00:12:28.218 00:12:28.218 real 0m9.315s 00:12:28.218 user 0m28.942s 00:12:28.218 sys 0m1.284s 00:12:28.218 19:13:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:28.218 19:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:28.218 19:13:05 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:12:28.219 19:13:05 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:12:28.219 19:13:05 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:28.219 19:13:05 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:28.219 19:13:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:28.219 19:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:28.219 ************************************ 00:12:28.219 START TEST nvmf_vfio_user 00:12:28.219 ************************************ 00:12:28.219 19:13:05 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:28.219 * Looking for test storage... 00:12:28.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:28.219 19:13:05 -- nvmf/common.sh@7 -- # uname -s 00:12:28.219 19:13:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.219 19:13:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.219 19:13:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.219 19:13:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.219 19:13:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.219 19:13:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.219 19:13:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.219 19:13:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.219 19:13:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.219 19:13:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.219 19:13:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:12:28.219 19:13:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:12:28.219 19:13:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.219 19:13:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.219 19:13:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:28.219 19:13:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:28.219 19:13:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.219 19:13:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.219 19:13:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.219 19:13:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.219 19:13:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.219 19:13:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.219 19:13:05 -- paths/export.sh@5 -- # export PATH 00:12:28.219 19:13:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.219 19:13:05 -- nvmf/common.sh@46 -- # : 0 00:12:28.219 19:13:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:28.219 19:13:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:28.219 19:13:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:28.219 19:13:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.219 19:13:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.219 19:13:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:28.219 19:13:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:28.219 19:13:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=69483 00:12:28.219 Process pid: 69483 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 69483' 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:28.219 
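nvmf_vfio_user.sh pulls in the same nvmf/common.sh environment; note the host identity lines near the top of this test, where the host NQN comes from nvme-cli and the host ID reuses its UUID suffix. A minimal sketch of that derivation (the exact extraction in common.sh is not shown in the trace, so the second line is an assumption):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumption: keep only the UUID after the last colon
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

These are the values common.sh hands to initiator-side tools whenever a test needs an explicit host identity.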
19:13:05 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:28.219 19:13:05 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 69483 00:12:28.219 19:13:05 -- common/autotest_common.sh@817 -- # '[' -z 69483 ']' 00:12:28.219 19:13:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.219 19:13:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:28.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.219 19:13:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.219 19:13:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:28.219 19:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:28.219 [2024-02-14 19:13:05.626530] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:28.219 [2024-02-14 19:13:05.627235] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.478 [2024-02-14 19:13:05.769701] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.478 [2024-02-14 19:13:05.873089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:28.478 [2024-02-14 19:13:05.873243] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.478 [2024-02-14 19:13:05.873260] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.478 [2024-02-14 19:13:05.873271] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
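Here waitforlisten blocks until the freshly started target (pid 69483) is actually usable: it repeatedly checks that the process is still alive and that the RPC socket /var/tmp/spdk.sock answers, giving up after max_retries attempts. A conceptual sketch of that wait (the real helper lives in autotest_common.sh and differs in detail):

  # conceptual sketch only; not the actual autotest_common.sh implementation
  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2> /dev/null || return 1    # target died during startup
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        return 0                                 # RPC server is up and answering
      fi
      sleep 0.5
    done
    return 1
  }

The notices around this point come from the target itself: it was launched with -e 0xFFFF, so every tracepoint group is enabled (hence the spdk_trace hint and the /dev/shm/nvmf_trace.0 path), and with -m '[0,1,2,3]' it brings up one reactor on each of the four listed cores.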
00:12:28.478 [2024-02-14 19:13:05.873532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.478 [2024-02-14 19:13:05.874333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.478 [2024-02-14 19:13:05.874572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.478 [2024-02-14 19:13:05.874602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.452 19:13:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:29.452 19:13:06 -- common/autotest_common.sh@850 -- # return 0 00:12:29.452 19:13:06 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:30.385 19:13:07 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:30.643 19:13:07 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:30.643 19:13:07 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:30.643 19:13:07 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:30.643 19:13:07 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:30.643 19:13:07 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:30.901 Malloc1 00:12:30.901 19:13:08 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:31.159 19:13:08 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:31.417 19:13:08 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:31.675 19:13:08 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:31.675 19:13:09 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:31.675 19:13:09 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:31.934 Malloc2 00:12:31.934 19:13:09 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:32.192 19:13:09 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:32.450 19:13:09 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:32.709 19:13:10 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:32.709 19:13:10 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:32.709 19:13:10 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:32.709 19:13:10 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:32.709 19:13:10 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:32.709 19:13:10 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:32.709 [2024-02-14 19:13:10.101816] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
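With the target answering RPCs, setup_nvmf_vfio_user provisions two emulated controllers. Under the VFIOUSER transport a listener's address is not an IP and port but a directory holding a vfio-user socket, so each subsystem gets its own /var/run/vfio-user/domain/vfio-userN/N path and a service id of 0. The per-device loop, collapsed from the commands traced above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
    traddr=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$traddr"
    $RPC bdev_malloc_create 64 512 -b Malloc$i        # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from the script
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$traddr" -s 0
  done

run_nvmf_vfio_user then points spdk_nvme_identify at the first device with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'. The vfio_user_pci debug lines that follow are the client mapping the emulated controller's BARs and then walking the usual NVMe bring-up sequence (read VS and CAP, check CC.EN, enable the controller, wait for CSTS.RDY = 1, then Identify Controller, Active Namespace list and per-namespace Identify), exactly as it would against a physical device.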
00:12:32.709 [2024-02-14 19:13:10.101863] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69619 ] 00:12:32.969 [2024-02-14 19:13:10.239054] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:32.969 [2024-02-14 19:13:10.248028] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:32.969 [2024-02-14 19:13:10.248074] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efdaeaf2000 00:12:32.969 [2024-02-14 19:13:10.249034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.969 [2024-02-14 19:13:10.250025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.969 [2024-02-14 19:13:10.251019] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.969 [2024-02-14 19:13:10.252021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.969 [2024-02-14 19:13:10.253024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.969 [2024-02-14 19:13:10.254033] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.969 [2024-02-14 19:13:10.255044] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.969 [2024-02-14 19:13:10.256046] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.969 [2024-02-14 19:13:10.257064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:32.969 [2024-02-14 19:13:10.257104] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efdae1d0000 00:12:32.969 [2024-02-14 19:13:10.258595] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:32.969 [2024-02-14 19:13:10.280562] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:32.969 [2024-02-14 19:13:10.280641] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:32.969 [2024-02-14 19:13:10.283205] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:32.969 [2024-02-14 19:13:10.283288] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:32.969 [2024-02-14 19:13:10.283394] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:32.969 [2024-02-14 
19:13:10.283417] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:32.969 [2024-02-14 19:13:10.283424] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:32.969 [2024-02-14 19:13:10.284178] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:32.969 [2024-02-14 19:13:10.284217] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:32.969 [2024-02-14 19:13:10.284229] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:32.969 [2024-02-14 19:13:10.285177] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:32.969 [2024-02-14 19:13:10.285201] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:32.969 [2024-02-14 19:13:10.285217] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:32.969 [2024-02-14 19:13:10.286187] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:32.969 [2024-02-14 19:13:10.286209] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:32.969 [2024-02-14 19:13:10.287192] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:32.969 [2024-02-14 19:13:10.287215] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:32.969 [2024-02-14 19:13:10.287231] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:32.969 [2024-02-14 19:13:10.287241] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:32.969 [2024-02-14 19:13:10.287347] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:32.970 [2024-02-14 19:13:10.287353] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:32.970 [2024-02-14 19:13:10.287361] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:32.970 [2024-02-14 19:13:10.288200] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:32.970 [2024-02-14 19:13:10.289201] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:32.970 [2024-02-14 19:13:10.290208] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:12:32.970 [2024-02-14 19:13:10.291272] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:32.970 [2024-02-14 19:13:10.292223] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:32.970 [2024-02-14 19:13:10.292244] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:32.970 [2024-02-14 19:13:10.292260] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292283] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:32.970 [2024-02-14 19:13:10.292295] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292316] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.970 [2024-02-14 19:13:10.292322] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.970 [2024-02-14 19:13:10.292342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.292410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 19:13:10.292422] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:32.970 [2024-02-14 19:13:10.292434] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:32.970 [2024-02-14 19:13:10.292439] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:32.970 [2024-02-14 19:13:10.292445] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:32.970 [2024-02-14 19:13:10.292450] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:32.970 [2024-02-14 19:13:10.292455] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:32.970 [2024-02-14 19:13:10.292461] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292474] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.292534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 19:13:10.292551] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.970 [2024-02-14 19:13:10.292561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.970 [2024-02-14 19:13:10.292570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.970 [2024-02-14 19:13:10.292580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.970 [2024-02-14 19:13:10.292585] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292597] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.292619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 19:13:10.292626] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:32.970 [2024-02-14 19:13:10.292632] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292640] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292648] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.292679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 19:13:10.292731] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292742] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292751] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:32.970 [2024-02-14 19:13:10.292757] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:32.970 [2024-02-14 19:13:10.292765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.292781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 
19:13:10.292799] nvme_ctrlr.c:4544:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:32.970 [2024-02-14 19:13:10.292811] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292822] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292830] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.970 [2024-02-14 19:13:10.292835] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.970 [2024-02-14 19:13:10.292842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.292878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 19:13:10.292895] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292905] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292913] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.970 [2024-02-14 19:13:10.292918] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.970 [2024-02-14 19:13:10.292925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.292938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 19:13:10.292947] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292956] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292965] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292972] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292978] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292983] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:32.970 [2024-02-14 19:13:10.292988] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:32.970 [2024-02-14 19:13:10.292994] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:32.970 [2024-02-14 19:13:10.293016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.293028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 19:13:10.293043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.293054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 19:13:10.293069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.293080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 19:13:10.293093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:32.970 [2024-02-14 19:13:10.293108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:32.970 [2024-02-14 19:13:10.293121] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:32.970 [2024-02-14 19:13:10.293126] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:32.970 [2024-02-14 19:13:10.293130] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:32.970 [2024-02-14 19:13:10.293134] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:32.970 [2024-02-14 19:13:10.293141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:32.971 [2024-02-14 19:13:10.293150] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:32.971 [2024-02-14 19:13:10.293154] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:32.971 [2024-02-14 19:13:10.293161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:32.971 [2024-02-14 19:13:10.293168] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:32.971 [2024-02-14 19:13:10.293173] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.971 [2024-02-14 19:13:10.293179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.971 [2024-02-14 19:13:10.293188] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:32.971 [2024-02-14 19:13:10.293193] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:32.971 ===================================================== 00:12:32.971 NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.971 ===================================================== 00:12:32.971 Controller Capabilities/Features 00:12:32.971 ================================ 00:12:32.971 Vendor ID: 4e58 00:12:32.971 Subsystem Vendor ID: 4e58 00:12:32.971 Serial Number: SPDK1 00:12:32.971 Model Number: SPDK bdev Controller 00:12:32.971 Firmware Version: 24.05 00:12:32.971 Recommended Arb Burst: 6 00:12:32.971 IEEE OUI Identifier: 8d 6b 50 00:12:32.971 Multi-path I/O 00:12:32.971 May have multiple subsystem ports: Yes 00:12:32.971 May have multiple controllers: Yes 00:12:32.971 Associated with SR-IOV VF: No 00:12:32.971 Max Data Transfer Size: 131072 00:12:32.971 Max Number of Namespaces: 32 00:12:32.971 Max Number of I/O Queues: 127 00:12:32.971 NVMe Specification Version (VS): 1.3 00:12:32.971 NVMe Specification Version (Identify): 1.3 00:12:32.971 Maximum Queue Entries: 256 00:12:32.971 Contiguous Queues Required: Yes 00:12:32.971 Arbitration Mechanisms Supported 00:12:32.971 Weighted Round Robin: Not Supported 00:12:32.971 Vendor Specific: Not Supported 00:12:32.971 Reset Timeout: 15000 ms 00:12:32.971 Doorbell Stride: 4 bytes 00:12:32.971 NVM Subsystem Reset: Not Supported 00:12:32.971 Command Sets Supported 00:12:32.971 NVM Command Set: Supported 00:12:32.971 Boot Partition: Not Supported 00:12:32.971 Memory Page Size Minimum: 4096 bytes 00:12:32.971 Memory Page Size Maximum: 4096 bytes 00:12:32.971 Persistent Memory Region: Not Supported 00:12:32.971 Optional Asynchronous Events Supported 00:12:32.971 Namespace Attribute Notices: Supported 00:12:32.971 Firmware Activation Notices: Not Supported 00:12:32.971 ANA Change Notices: Not Supported 00:12:32.971 PLE Aggregate Log Change Notices: Not Supported 00:12:32.971 LBA Status Info Alert Notices: Not Supported 00:12:32.971 EGE Aggregate Log Change Notices: Not Supported 00:12:32.971 Normal NVM Subsystem Shutdown event: Not Supported 00:12:32.971 Zone Descriptor Change Notices: Not Supported 00:12:32.971 Discovery Log Change Notices: Not Supported 00:12:32.971 Controller Attributes 00:12:32.971 128-bit Host Identifier: Supported 00:12:32.971 Non-Operational Permissive Mode: Not Supported 00:12:32.971 NVM Sets: Not Supported 00:12:32.971 Read Recovery Levels: Not Supported 00:12:32.971 Endurance Groups: Not Supported 00:12:32.971 Predictable Latency Mode: Not Supported 00:12:32.971 Traffic Based Keep ALive: Not Supported 00:12:32.971 Namespace Granularity: Not Supported 00:12:32.971 SQ Associations: Not Supported 00:12:32.971 UUID List: Not Supported 00:12:32.971 Multi-Domain Subsystem: Not Supported 00:12:32.971 Fixed Capacity Management: Not Supported 00:12:32.971 Variable Capacity Management: Not Supported 00:12:32.971 Delete Endurance Group: Not Supported 00:12:32.971 Delete NVM Set: Not Supported 00:12:32.971 Extended LBA Formats Supported: Not Supported 00:12:32.971 Flexible Data Placement Supported: Not Supported 00:12:32.971 00:12:32.971 Controller Memory Buffer Support 00:12:32.971 ================================ 00:12:32.971 Supported: No 00:12:32.971 00:12:32.971 Persistent Memory Region Support 00:12:32.971 ================================ 00:12:32.971 Supported: No 00:12:32.971 00:12:32.971 Admin Command Set Attributes 00:12:32.971 ============================ 00:12:32.971 Security Send/Receive: Not Supported 00:12:32.971 Format NVM: Not Supported 00:12:32.971 Firmware Activate/Download: Not Supported 00:12:32.971 Namespace Management: Not Supported 00:12:32.971 Device 
Self-Test: Not Supported 00:12:32.971 Directives: Not Supported 00:12:32.971 NVMe-MI: Not Supported 00:12:32.971 Virtualization Management: Not Supported 00:12:32.971 Doorbell Buffer Config: Not Supported 00:12:32.971 Get LBA Status Capability: Not Supported 00:12:32.971 Command & Feature Lockdown Capability: Not Supported 00:12:32.971 Abort Command Limit: 4 00:12:32.971 Async Event Request Limit: 4 00:12:32.971 Number of Firmware Slots: N/A 00:12:32.971 Firmware Slot 1 Read-Only: N/A 00:12:32.971 Firmware Activation Without Reset: N/A 00:12:32.971 Multiple Update Detection Support: N/A 00:12:32.971 Firmware Update Granularity: No Information Provided 00:12:32.971 Per-Namespace SMART Log: No 00:12:32.971 Asymmetric Namespace Access Log Page: Not Supported 00:12:32.971 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:32.971 Command Effects Log Page: Supported 00:12:32.971 Get Log Page Extended Data: Supported 00:12:32.971 Telemetry Log Pages: Not Supported 00:12:32.971 Persistent Event Log Pages: Not Supported 00:12:32.971 Supported Log Pages Log Page: May Support 00:12:32.971 Commands Supported & Effects Log Page: Not Supported 00:12:32.971 Feature Identifiers & Effects Log Page:May Support 00:12:32.971 NVMe-MI Commands & Effects Log Page: May Support 00:12:32.971 Data Area 4 for Telemetry Log: Not Supported 00:12:32.971 Error Log Page Entries Supported: 128 00:12:32.971 Keep Alive: Supported 00:12:32.971 Keep Alive Granularity: 10000 ms 00:12:32.971 00:12:32.971 NVM Command Set Attributes 00:12:32.971 ========================== 00:12:32.971 Submission Queue Entry Size 00:12:32.971 Max: 64 00:12:32.971 Min: 64 00:12:32.971 Completion Queue Entry Size 00:12:32.971 Max: 16 00:12:32.971 Min: 16 00:12:32.971 Number of Namespaces: 32 00:12:32.971 Compare Command: Supported 00:12:32.971 Write Uncorrectable Command: Not Supported 00:12:32.971 Dataset Management Command: Supported 00:12:32.971 Write Zeroes Command: Supported 00:12:32.971 Set Features Save Field: Not Supported 00:12:32.971 Reservations: Not Supported 00:12:32.971 Timestamp: Not Supported 00:12:32.971 Copy: Supported 00:12:32.971 Volatile Write Cache: Present 00:12:32.971 Atomic Write Unit (Normal): 1 00:12:32.971 Atomic Write Unit (PFail): 1 00:12:32.971 Atomic Compare & Write Unit: 1 00:12:32.971 Fused Compare & Write: Supported 00:12:32.971 Scatter-Gather List 00:12:32.971 SGL Command Set: Supported (Dword aligned) 00:12:32.971 SGL Keyed: Not Supported 00:12:32.971 SGL Bit Bucket Descriptor: Not Supported 00:12:32.971 SGL Metadata Pointer: Not Supported 00:12:32.971 Oversized SGL: Not Supported 00:12:32.971 SGL Metadata Address: Not Supported 00:12:32.971 SGL Offset: Not Supported 00:12:32.971 Transport SGL Data Block: Not Supported 00:12:32.971 Replay Protected Memory Block: Not Supported 00:12:32.971 00:12:32.971 Firmware Slot Information 00:12:32.971 ========================= 00:12:32.971 Active slot: 1 00:12:32.971 Slot 1 Firmware Revision: 24.05 00:12:32.971 00:12:32.971 00:12:32.971 Commands Supported and Effects 00:12:32.971 ============================== 00:12:32.971 Admin Commands 00:12:32.971 -------------- 00:12:32.971 Get Log Page (02h): Supported 00:12:32.971 Identify (06h): Supported 00:12:32.971 Abort (08h): Supported 00:12:32.971 Set Features (09h): Supported 00:12:32.971 Get Features (0Ah): Supported 00:12:32.971 Asynchronous Event Request (0Ch): Supported 00:12:32.971 Keep Alive (18h): Supported 00:12:32.971 I/O Commands 00:12:32.971 ------------ 00:12:32.971 Flush (00h): Supported LBA-Change 00:12:32.971 Write 
(01h): Supported LBA-Change 00:12:32.971 Read (02h): Supported 00:12:32.971 Compare (05h): Supported 00:12:32.971 Write Zeroes (08h): Supported LBA-Change 00:12:32.971 Dataset Management (09h): Supported LBA-Change 00:12:32.971 Copy (19h): Supported LBA-Change 00:12:32.971 Unknown (79h): Supported LBA-Change 00:12:32.971 Unknown (7Ah): Supported 00:12:32.971 00:12:32.971 Error Log 00:12:32.971 ========= 00:12:32.971 00:12:32.971 Arbitration 00:12:32.971 =========== 00:12:32.971 Arbitration Burst: 1 00:12:32.971 00:12:32.971 Power Management 00:12:32.971 ================ 00:12:32.971 Number of Power States: 1 00:12:32.971 Current Power State: Power State #0 00:12:32.972 Power State #0: 00:12:32.972 Max Power: 0.00 W 00:12:32.972 Non-Operational State: Operational 00:12:32.972 Entry Latency: Not Reported 00:12:32.972 Exit Latency: Not Reported 00:12:32.972 Relative Read Throughput: 0 00:12:32.972 Relative Read Latency: 0 00:12:32.972 Relative Write Throughput: 0 00:12:32.972 Relative Write Latency: 0 00:12:32.972 Idle Power: Not Reported 00:12:32.972 Active Power: Not Reported 00:12:32.972 Non-Operational Permissive Mode: Not Supported 00:12:32.972 00:12:32.972 Health Information 00:12:32.972 ================== 00:12:32.972 Critical Warnings: 00:12:32.972 Available Spare Space: OK 00:12:32.972 Temperature: OK 00:12:32.972 Device Reliability: OK 00:12:32.972 Read Only: No 00:12:32.972 Volatile Memory Backup: OK 00:12:32.972 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:32.972 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:32.972 Available Spare: 0% 00:12:32.972 Available Spare Threshold: 0% 00:12:32.972 Life Percentage Used: 0% 00:12:32.972 Data Units Read: 0 00:12:32.972 Data Units Written: 0 00:12:32.972 Host Read Commands: 0 00:12:32.972 Host Write Commands: 0 00:12:32.972 Controller Busy Time: 0 minutes 00:12:32.972 Power Cycles: 0 00:12:32.972 Power On Hours: 0 hours 00:12:32.972 Unsafe Shutdowns: 0 00:12:32.972 Unrecoverable Media Errors: 0 00:12:32.972 Lifetime Error Log Entries: 0 00:12:32.972 Warning Temperature Time: 0 minutes 00:12:32.972 Critical Temperature Time: 0 minutes 00:12:32.972 00:12:32.972 Number of Queues 00:12:32.972 ================ 00:12:32.972 Number of I/O Submission Queues: 127 00:12:32.972 Number of I/O Completion Queues: 127 00:12:32.972 00:12:32.972 Active Namespaces 00:12:32.972 ================= 00:12:32.972 Namespace ID:1 00:12:32.972 Error Recovery Timeout: Unlimited 00:12:32.972 Command Set Identifier: NVM (00h) 00:12:32.972 Deallocate: Supported 00:12:32.972 Deallocated/Unwritten Error: Not Supported 00:12:32.972 Deallocated Read Value: Unknown 00:12:32.972 Deallocate in Write Zeroes: Not Supported 00:12:32.972 Deallocated Guard Field: 0xFFFF 00:12:32.972 Flush: Supported 00:12:32.972 Reservation: Supported 00:12:32.972 Namespace Sharing Capabilities: Multiple Controllers 00:12:32.972 Size (in LBAs): 131072 (0GiB) 00:12:32.972 Capacity (in LBAs): 131072 (0GiB) 00:12:32.972 Utilization (in LBAs): 131072 (0GiB) 00:12:32.972 NGUID: 83B7C26DF52E4C87AC65595B0D2E5B33 00:12:32.972 UUID: 83b7c26d-f52e-4c87-ac65-595b0d2e5b33 00:12:32.972 Thin Provisioning: Not Supported 00:12:32.972 Per-NS Atomic Units: Yes 00:12:32.972 Atomic Boundary Size (Normal): 0 00:12:32.972 Atomic Boundary Size (PFail): 0 00:12:32.972 Atomic Boundary Offset: 0 00:12:32.972 Maximum Single Source Range Length: 65535 00:12:32.972 Maximum Copy Length: 65535 00:12:32.972 Maximum Source Range Count: 1 00:12:32.972 NGUID/EUI64 Never Reused: No 00:12:32.972 Namespace Write Protected: No 00:12:32.972 Number of LBA Formats: 1 00:12:32.972 Current LBA Format: LBA Format #00 00:12:32.972 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:32.972 00:12:32.972
[2024-02-14 19:13:10.293199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:32.972 [2024-02-14 19:13:10.293207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:32.972 [2024-02-14 19:13:10.293224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:32.972 [2024-02-14 19:13:10.293236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:32.972 [2024-02-14 19:13:10.293245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:32.972 [2024-02-14 19:13:10.293372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:32.972 [2024-02-14 19:13:10.293384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:32.972 [2024-02-14 19:13:10.293420] nvme_ctrlr.c:4208:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:32.972 [2024-02-14 19:13:10.293432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.972 [2024-02-14 19:13:10.293440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.972 [2024-02-14 19:13:10.293447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.972 [2024-02-14 19:13:10.293454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.972 [2024-02-14 19:13:10.296517] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:32.972 [2024-02-14 19:13:10.296544] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:32.972 [2024-02-14 19:13:10.297314] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:32.972 [2024-02-14 19:13:10.297330] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:32.972 [2024-02-14 19:13:10.298263] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:32.972 [2024-02-14 19:13:10.298302] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:32.972 [2024-02-14 19:13:10.298492] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:32.972 [2024-02-14 19:13:10.301502] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:32.972
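The controller dump above comes from the spdk_nvme_identify step of nvmf_vfio_user.sh (the same invocation appears verbatim later in this log for the second controller), and the latency tables that follow come from spdk_nvme_perf runs against the same vfio-user endpoint. As a rough sketch of the first run below, with the binary path, transport string and flags taken from the logged command; the flag notes are editorial and approximate, not text from this log:

  # flag notes (approximate): -q queue depth, -o I/O size in bytes, -w workload type,
  # -t run time in seconds, -c core mask, -s DPDK memory size in MB, -g as passed in the logged command
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

The second run only swaps -w read for -w write, which is why its IOPS and average latency differ in the table that follows it.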
19:13:10 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:39.535 Initializing NVMe Controllers 00:12:39.535 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:39.535 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:39.535 Initialization complete. Launching workers. 00:12:39.535 ======================================================== 00:12:39.535 Latency(us) 00:12:39.535 Device Information : IOPS MiB/s Average min max 00:12:39.535 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 30397.96 118.74 4210.18 1191.22 11413.27 00:12:39.535 ======================================================== 00:12:39.535 Total : 30397.96 118.74 4210.18 1191.22 11413.27 00:12:39.535 00:12:39.535 19:13:15 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:43.721 Initializing NVMe Controllers 00:12:43.721 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:43.721 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:43.721 Initialization complete. Launching workers. 00:12:43.721 ======================================================== 00:12:43.721 Latency(us) 00:12:43.721 Device Information : IOPS MiB/s Average min max 00:12:43.721 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15974.40 62.40 8020.04 6872.55 16021.40 00:12:43.721 ======================================================== 00:12:43.721 Total : 15974.40 62.40 8020.04 6872.55 16021.40 00:12:43.721 00:12:43.721 19:13:21 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:48.989 Initializing NVMe Controllers 00:12:48.989 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.989 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.989 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:48.989 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:48.989 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:48.989 Initialization complete. Launching workers. 
00:12:48.989 Starting thread on core 2 00:12:48.989 Starting thread on core 3 00:12:48.989 Starting thread on core 1 00:12:48.989 19:13:26 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:53.212 Initializing NVMe Controllers 00:12:53.212 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.212 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.212 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:53.212 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:53.212 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:53.212 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:53.212 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:53.212 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:53.212 Initialization complete. Launching workers. 00:12:53.212 Starting thread on core 1 with urgent priority queue 00:12:53.212 Starting thread on core 2 with urgent priority queue 00:12:53.212 Starting thread on core 3 with urgent priority queue 00:12:53.212 Starting thread on core 0 with urgent priority queue 00:12:53.212 SPDK bdev Controller (SPDK1 ) core 0: 3000.33 IO/s 33.33 secs/100000 ios 00:12:53.212 SPDK bdev Controller (SPDK1 ) core 1: 3385.33 IO/s 29.54 secs/100000 ios 00:12:53.212 SPDK bdev Controller (SPDK1 ) core 2: 3015.33 IO/s 33.16 secs/100000 ios 00:12:53.212 SPDK bdev Controller (SPDK1 ) core 3: 3170.33 IO/s 31.54 secs/100000 ios 00:12:53.212 ======================================================== 00:12:53.212 00:12:53.212 19:13:29 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:53.212 Initializing NVMe Controllers 00:12:53.212 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.212 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.212 Namespace ID: 1 size: 0GB 00:12:53.212 Initialization complete. 00:12:53.212 INFO: using host memory buffer for IO 00:12:53.212 Hello world! 00:12:53.212 19:13:30 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:54.149 Initializing NVMe Controllers 00:12:54.149 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:54.149 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:54.149 Initialization complete. Launching workers. 
00:12:54.149 submit (in ns) avg, min, max = 10397.1, 4058.6, 6017934.1 00:12:54.149 complete (in ns) avg, min, max = 34896.0, 2346.8, 6027405.5 00:12:54.149 00:12:54.149 Submit histogram 00:12:54.149 ================ 00:12:54.149 Range in us Cumulative Count 00:12:54.149 4.044 - 4.073: 0.0410% ( 4) 00:12:54.149 4.073 - 4.102: 0.3176% ( 27) 00:12:54.149 4.102 - 4.131: 0.9322% ( 60) 00:12:54.149 4.131 - 4.160: 1.8336% ( 88) 00:12:54.149 4.160 - 4.189: 3.4112% ( 154) 00:12:54.149 4.189 - 4.218: 6.0131% ( 254) 00:12:54.149 4.218 - 4.247: 11.9955% ( 584) 00:12:54.149 4.247 - 4.276: 20.9998% ( 879) 00:12:54.150 4.276 - 4.305: 31.6738% ( 1042) 00:12:54.150 4.305 - 4.335: 43.5464% ( 1159) 00:12:54.150 4.335 - 4.364: 54.4765% ( 1067) 00:12:54.150 4.364 - 4.393: 63.6550% ( 896) 00:12:54.150 4.393 - 4.422: 71.2252% ( 739) 00:12:54.150 4.422 - 4.451: 76.1729% ( 483) 00:12:54.150 4.451 - 4.480: 79.6865% ( 343) 00:12:54.150 4.480 - 4.509: 82.3909% ( 264) 00:12:54.150 4.509 - 4.538: 84.2553% ( 182) 00:12:54.150 4.538 - 4.567: 86.1504% ( 185) 00:12:54.150 4.567 - 4.596: 87.7894% ( 160) 00:12:54.150 4.596 - 4.625: 89.3055% ( 148) 00:12:54.150 4.625 - 4.655: 90.7089% ( 137) 00:12:54.150 4.655 - 4.684: 92.2250% ( 148) 00:12:54.150 4.684 - 4.713: 93.6591% ( 140) 00:12:54.150 4.713 - 4.742: 94.8679% ( 118) 00:12:54.150 4.742 - 4.771: 95.7283% ( 84) 00:12:54.150 4.771 - 4.800: 96.3635% ( 62) 00:12:54.150 4.800 - 4.829: 96.6503% ( 28) 00:12:54.150 4.829 - 4.858: 96.9576% ( 30) 00:12:54.150 4.858 - 4.887: 97.1830% ( 22) 00:12:54.150 4.887 - 4.916: 97.2956% ( 11) 00:12:54.150 4.916 - 4.945: 97.4288% ( 13) 00:12:54.150 4.945 - 4.975: 97.4903% ( 6) 00:12:54.150 4.975 - 5.004: 97.6337% ( 14) 00:12:54.150 5.004 - 5.033: 97.6849% ( 5) 00:12:54.150 5.033 - 5.062: 97.7976% ( 11) 00:12:54.150 5.062 - 5.091: 97.8795% ( 8) 00:12:54.150 5.091 - 5.120: 97.9820% ( 10) 00:12:54.150 5.120 - 5.149: 98.0229% ( 4) 00:12:54.150 5.149 - 5.178: 98.0742% ( 5) 00:12:54.150 5.178 - 5.207: 98.1151% ( 4) 00:12:54.150 5.207 - 5.236: 98.1356% ( 2) 00:12:54.150 5.236 - 5.265: 98.2381% ( 10) 00:12:54.150 5.265 - 5.295: 98.3507% ( 11) 00:12:54.150 5.295 - 5.324: 98.4327% ( 8) 00:12:54.150 5.324 - 5.353: 98.4942% ( 6) 00:12:54.150 5.353 - 5.382: 98.5761% ( 8) 00:12:54.150 5.382 - 5.411: 98.6683% ( 9) 00:12:54.150 5.411 - 5.440: 98.7195% ( 5) 00:12:54.150 5.440 - 5.469: 98.7810% ( 6) 00:12:54.150 5.469 - 5.498: 98.8322% ( 5) 00:12:54.150 5.498 - 5.527: 98.8732% ( 4) 00:12:54.150 5.527 - 5.556: 98.9142% ( 4) 00:12:54.150 5.556 - 5.585: 98.9551% ( 4) 00:12:54.150 5.585 - 5.615: 99.0268% ( 7) 00:12:54.150 5.615 - 5.644: 99.0371% ( 1) 00:12:54.150 5.673 - 5.702: 99.0678% ( 3) 00:12:54.150 5.702 - 5.731: 99.0883% ( 2) 00:12:54.150 5.731 - 5.760: 99.0985% ( 1) 00:12:54.150 5.760 - 5.789: 99.1088% ( 1) 00:12:54.150 5.789 - 5.818: 99.1190% ( 1) 00:12:54.150 5.847 - 5.876: 99.1293% ( 1) 00:12:54.150 5.876 - 5.905: 99.1498% ( 2) 00:12:54.150 5.905 - 5.935: 99.1600% ( 1) 00:12:54.150 5.935 - 5.964: 99.1805% ( 2) 00:12:54.150 5.964 - 5.993: 99.2010% ( 2) 00:12:54.150 6.022 - 6.051: 99.2112% ( 1) 00:12:54.150 6.080 - 6.109: 99.2215% ( 1) 00:12:54.150 6.109 - 6.138: 99.2317% ( 1) 00:12:54.150 6.138 - 6.167: 99.2420% ( 1) 00:12:54.150 6.255 - 6.284: 99.2522% ( 1) 00:12:54.150 6.400 - 6.429: 99.2624% ( 1) 00:12:54.150 6.458 - 6.487: 99.2727% ( 1) 00:12:54.150 6.487 - 6.516: 99.2829% ( 1) 00:12:54.150 6.691 - 6.720: 99.2932% ( 1) 00:12:54.150 6.749 - 6.778: 99.3137% ( 2) 00:12:54.150 7.215 - 7.244: 99.3239% ( 1) 00:12:54.150 7.389 - 7.418: 99.3342% ( 1) 
00:12:54.150 7.447 - 7.505: 99.3444% ( 1) 00:12:54.150 7.622 - 7.680: 99.3546% ( 1) 00:12:54.150 7.738 - 7.796: 99.3649% ( 1) 00:12:54.150 8.204 - 8.262: 99.3751% ( 1) 00:12:54.150 8.844 - 8.902: 99.3854% ( 1) 00:12:54.150 9.425 - 9.484: 99.3956% ( 1) 00:12:54.150 9.484 - 9.542: 99.4059% ( 1) 00:12:54.150 9.949 - 10.007: 99.4263% ( 2) 00:12:54.150 10.705 - 10.764: 99.4468% ( 2) 00:12:54.150 10.764 - 10.822: 99.4571% ( 1) 00:12:54.150 10.880 - 10.938: 99.4673% ( 1) 00:12:54.150 10.938 - 10.996: 99.4776% ( 1) 00:12:54.150 10.996 - 11.055: 99.4878% ( 1) 00:12:54.150 11.055 - 11.113: 99.4981% ( 1) 00:12:54.150 11.113 - 11.171: 99.5185% ( 2) 00:12:54.150 11.171 - 11.229: 99.5390% ( 2) 00:12:54.150 11.345 - 11.404: 99.5493% ( 1) 00:12:54.150 11.462 - 11.520: 99.5595% ( 1) 00:12:54.150 11.636 - 11.695: 99.5698% ( 1) 00:12:54.150 11.695 - 11.753: 99.6107% ( 4) 00:12:54.150 11.753 - 11.811: 99.6210% ( 1) 00:12:54.150 11.811 - 11.869: 99.6415% ( 2) 00:12:54.150 11.869 - 11.927: 99.6517% ( 1) 00:12:54.150 11.985 - 12.044: 99.6824% ( 3) 00:12:54.150 12.044 - 12.102: 99.6927% ( 1) 00:12:54.150 12.160 - 12.218: 99.7029% ( 1) 00:12:54.150 12.335 - 12.393: 99.7132% ( 1) 00:12:54.150 12.451 - 12.509: 99.7337% ( 2) 00:12:54.150 12.567 - 12.625: 99.7439% ( 1) 00:12:54.150 12.916 - 12.975: 99.7541% ( 1) 00:12:54.150 13.440 - 13.498: 99.7644% ( 1) 00:12:54.150 13.673 - 13.731: 99.7746% ( 1) 00:12:54.150 13.847 - 13.905: 99.7849% ( 1) 00:12:54.150 15.942 - 16.058: 99.7951% ( 1) 00:12:54.150 16.175 - 16.291: 99.8054% ( 1) 00:12:54.150 16.524 - 16.640: 99.8259% ( 2) 00:12:54.150 18.385 - 18.502: 99.8361% ( 1) 00:12:54.150 20.480 - 20.596: 99.8463% ( 1) 00:12:54.150 22.342 - 22.458: 99.8566% ( 1) 00:12:54.150 3991.738 - 4021.527: 99.9795% ( 12) 00:12:54.150 4021.527 - 4051.316: 99.9898% ( 1) 00:12:54.150 6017.396 - 6047.185: 100.0000% ( 1) 00:12:54.150 00:12:54.150 Complete histogram 00:12:54.150 ================== 00:12:54.150 Range in us Cumulative Count 00:12:54.150 2.342 - 2.356: 0.8502% ( 83) 00:12:54.150 2.356 - 2.371: 13.4604% ( 1231) 00:12:54.150 2.371 - 2.385: 38.6704% ( 2461) 00:12:54.150 2.385 - 2.400: 63.2862% ( 2403) 00:12:54.150 2.400 - 2.415: 76.8183% ( 1321) 00:12:54.150 2.415 - 2.429: 82.0631% ( 512) 00:12:54.150 2.429 - 2.444: 84.0402% ( 193) 00:12:54.150 2.444 - 2.458: 85.8431% ( 176) 00:12:54.150 2.458 - 2.473: 88.4143% ( 251) 00:12:54.150 2.473 - 2.487: 91.0879% ( 261) 00:12:54.150 2.487 - 2.502: 92.5118% ( 139) 00:12:54.150 2.502 - 2.516: 93.3313% ( 80) 00:12:54.150 2.516 - 2.531: 93.7820% ( 44) 00:12:54.150 2.531 - 2.545: 94.3659% ( 57) 00:12:54.150 2.545 - 2.560: 94.7552% ( 38) 00:12:54.150 2.560 - 2.575: 95.1137% ( 35) 00:12:54.150 2.575 - 2.589: 95.4005% ( 28) 00:12:54.150 2.589 - 2.604: 95.7078% ( 30) 00:12:54.150 2.604 - 2.618: 95.9947% ( 28) 00:12:54.150 2.618 - 2.633: 96.2713% ( 27) 00:12:54.150 2.633 - 2.647: 96.5786% ( 30) 00:12:54.150 2.647 - 2.662: 96.8142% ( 23) 00:12:54.150 2.662 - 2.676: 97.0600% ( 24) 00:12:54.150 2.676 - 2.691: 97.2547% ( 19) 00:12:54.150 2.691 - 2.705: 97.4186% ( 16) 00:12:54.150 2.705 - 2.720: 97.6439% ( 22) 00:12:54.150 2.720 - 2.735: 97.8795% ( 23) 00:12:54.150 2.735 - 2.749: 98.0844% ( 20) 00:12:54.150 2.749 - 2.764: 98.2176% ( 13) 00:12:54.150 2.764 - 2.778: 98.3200% ( 10) 00:12:54.150 2.778 - 2.793: 98.3712% ( 5) 00:12:54.150 2.793 - 2.807: 98.4429% ( 7) 00:12:54.150 2.807 - 2.822: 98.5146% ( 7) 00:12:54.150 2.822 - 2.836: 98.5351% ( 2) 00:12:54.150 2.836 - 2.851: 98.5556% ( 2) 00:12:54.150 2.880 - 2.895: 98.5659% ( 1) 00:12:54.150 2.909 - 2.924: 
98.5761% ( 1) 00:12:54.150 2.924 - 2.938: 98.5864% ( 1) 00:12:54.150 3.433 - 3.447: 98.5966% ( 1) 00:12:54.150 3.593 - 3.607: 98.6068% ( 1) 00:12:54.150 3.753 - 3.782: 98.6171% ( 1) 00:12:54.150 3.782 - 3.811: 98.6273% ( 1) 00:12:54.150 3.811 - 3.840: 98.6376% ( 1) 00:12:54.150 3.869 - 3.898: 98.6581% ( 2) 00:12:54.150 3.898 - 3.927: 98.6683% ( 1) 00:12:54.150 3.927 - 3.956: 98.6785% ( 1) 00:12:54.150 3.956 - 3.985: 98.6888% ( 1) 00:12:54.150 4.015 - 4.044: 98.7093% ( 2) 00:12:54.150 4.044 - 4.073: 98.7503% ( 4) 00:12:54.150 4.073 - 4.102: 98.7605% ( 1) 00:12:54.150 4.102 - 4.131: 98.7707% ( 1) 00:12:54.150 4.160 - 4.189: 98.8015% ( 3) 00:12:54.150 4.189 - 4.218: 98.8117% ( 1) 00:12:54.150 4.276 - 4.305: 98.8220% ( 1) 00:12:54.150 4.305 - 4.335: 98.8322% ( 1) 00:12:54.150 4.335 - 4.364: 98.8527% ( 2) 00:12:54.150 4.422 - 4.451: 98.8629% ( 1) 00:12:54.150 4.480 - 4.509: 98.8732% ( 1) 00:12:54.150 4.625 - 4.655: 98.8937% ( 2) 00:12:54.150 4.655 - 4.684: 98.9142% ( 2) 00:12:54.150 4.829 - 4.858: 98.9244% ( 1) 00:12:54.150 4.945 - 4.975: 98.9346% ( 1) 00:12:54.150 5.062 - 5.091: 98.9449% ( 1) 00:12:54.150 5.120 - 5.149: 98.9551% ( 1) 00:12:54.150 7.855 - 7.913: 98.9654% ( 1) 00:12:54.150 7.971 - 8.029: 98.9756% ( 1) 00:12:54.150 8.611 - 8.669: 98.9859% ( 1) 00:12:54.150 8.727 - 8.785: 98.9961% ( 1) 00:12:54.150 8.844 - 8.902: 99.0064% ( 1) 00:12:54.150 8.902 - 8.960: 99.0268% ( 2) 00:12:54.150 8.960 - 9.018: 99.0576% ( 3) 00:12:54.151 9.076 - 9.135: 99.0678% ( 1) 00:12:54.151 9.193 - 9.251: 99.0781% ( 1) 00:12:54.151 9.367 - 9.425: 99.0883% ( 1) 00:12:54.151 9.425 - 9.484: 99.1190% ( 3) 00:12:54.151 9.542 - 9.600: 99.1293% ( 1) 00:12:54.151 9.658 - 9.716: 99.1395% ( 1) 00:12:54.151 9.949 - 10.007: 99.1498% ( 1) 00:12:54.151 11.462 - 11.520: 99.1600% ( 1) 00:12:54.151 11.927 - 11.985: 99.1703% ( 1) 00:12:54.151 13.905 - 13.964: 99.1805% ( 1) 00:12:54.151 17.455 - 17.571: 99.1907% ( 1) 00:12:54.151 18.385 - 18.502: 99.2010% ( 1) 00:12:54.151 2025.658 - 2040.553: 99.2112% ( 1) 00:12:54.151 3991.738 - 4021.527: 99.9693% ( 74) 00:12:54.151 6017.396 - 6047.185: 100.0000% ( 3) 00:12:54.151 00:12:54.151 19:13:31 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:54.151 19:13:31 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:54.151 19:13:31 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:54.151 19:13:31 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:54.151 19:13:31 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:54.410 [2024-02-14 19:13:31.765198] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:54.410 [ 00:12:54.410 { 00:12:54.410 "allow_any_host": true, 00:12:54.410 "hosts": [], 00:12:54.410 "listen_addresses": [], 00:12:54.410 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:54.410 "subtype": "Discovery" 00:12:54.410 }, 00:12:54.410 { 00:12:54.410 "allow_any_host": true, 00:12:54.410 "hosts": [], 00:12:54.410 "listen_addresses": [ 00:12:54.410 { 00:12:54.410 "adrfam": "IPv4", 00:12:54.410 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:54.410 "transport": "VFIOUSER", 00:12:54.410 "trsvcid": "0", 00:12:54.410 "trtype": "VFIOUSER" 00:12:54.410 } 00:12:54.410 ], 00:12:54.410 "max_cntlid": 65519, 00:12:54.410 "max_namespaces": 32, 
00:12:54.410 "min_cntlid": 1, 00:12:54.410 "model_number": "SPDK bdev Controller", 00:12:54.410 "namespaces": [ 00:12:54.410 { 00:12:54.410 "bdev_name": "Malloc1", 00:12:54.410 "name": "Malloc1", 00:12:54.410 "nguid": "83B7C26DF52E4C87AC65595B0D2E5B33", 00:12:54.410 "nsid": 1, 00:12:54.410 "uuid": "83b7c26d-f52e-4c87-ac65-595b0d2e5b33" 00:12:54.410 } 00:12:54.410 ], 00:12:54.410 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:54.410 "serial_number": "SPDK1", 00:12:54.410 "subtype": "NVMe" 00:12:54.410 }, 00:12:54.410 { 00:12:54.410 "allow_any_host": true, 00:12:54.410 "hosts": [], 00:12:54.410 "listen_addresses": [ 00:12:54.410 { 00:12:54.410 "adrfam": "IPv4", 00:12:54.410 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:54.410 "transport": "VFIOUSER", 00:12:54.410 "trsvcid": "0", 00:12:54.410 "trtype": "VFIOUSER" 00:12:54.410 } 00:12:54.410 ], 00:12:54.410 "max_cntlid": 65519, 00:12:54.410 "max_namespaces": 32, 00:12:54.410 "min_cntlid": 1, 00:12:54.410 "model_number": "SPDK bdev Controller", 00:12:54.410 "namespaces": [ 00:12:54.410 { 00:12:54.410 "bdev_name": "Malloc2", 00:12:54.410 "name": "Malloc2", 00:12:54.410 "nguid": "8A891366B6604FE28D842A64F92A5BBD", 00:12:54.410 "nsid": 1, 00:12:54.410 "uuid": "8a891366-b660-4fe2-8d84-2a64f92a5bbd" 00:12:54.410 } 00:12:54.410 ], 00:12:54.410 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:54.410 "serial_number": "SPDK2", 00:12:54.410 "subtype": "NVMe" 00:12:54.410 } 00:12:54.410 ] 00:12:54.410 19:13:31 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:54.410 19:13:31 -- target/nvmf_vfio_user.sh@34 -- # aerpid=69875 00:12:54.410 19:13:31 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:54.410 19:13:31 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:54.410 19:13:31 -- common/autotest_common.sh@1242 -- # local i=0 00:12:54.410 19:13:31 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:54.410 19:13:31 -- common/autotest_common.sh@1244 -- # '[' 0 -lt 200 ']' 00:12:54.410 19:13:31 -- common/autotest_common.sh@1245 -- # i=1 00:12:54.410 19:13:31 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:12:54.669 19:13:31 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:54.669 19:13:31 -- common/autotest_common.sh@1244 -- # '[' 1 -lt 200 ']' 00:12:54.669 19:13:31 -- common/autotest_common.sh@1245 -- # i=2 00:12:54.669 19:13:31 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:12:54.669 19:13:31 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:54.669 19:13:31 -- common/autotest_common.sh@1244 -- # '[' 2 -lt 200 ']' 00:12:54.669 19:13:31 -- common/autotest_common.sh@1245 -- # i=3 00:12:54.669 19:13:31 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:12:54.927 19:13:32 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:54.927 19:13:32 -- common/autotest_common.sh@1249 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:54.927 19:13:32 -- common/autotest_common.sh@1253 -- # return 0 00:12:54.927 19:13:32 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:54.927 19:13:32 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:55.185 Malloc3 00:12:55.185 19:13:32 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:55.443 19:13:32 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:55.443 Asynchronous Event Request test 00:12:55.443 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:55.443 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:55.443 Registering asynchronous event callbacks... 00:12:55.443 Starting namespace attribute notice tests for all controllers... 00:12:55.443 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:55.443 aer_cb - Changed Namespace 00:12:55.443 Cleaning up... 00:12:55.702 [ 00:12:55.702 { 00:12:55.702 "allow_any_host": true, 00:12:55.702 "hosts": [], 00:12:55.702 "listen_addresses": [], 00:12:55.702 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:55.702 "subtype": "Discovery" 00:12:55.702 }, 00:12:55.702 { 00:12:55.702 "allow_any_host": true, 00:12:55.702 "hosts": [], 00:12:55.702 "listen_addresses": [ 00:12:55.702 { 00:12:55.702 "adrfam": "IPv4", 00:12:55.702 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:55.702 "transport": "VFIOUSER", 00:12:55.702 "trsvcid": "0", 00:12:55.702 "trtype": "VFIOUSER" 00:12:55.702 } 00:12:55.702 ], 00:12:55.702 "max_cntlid": 65519, 00:12:55.702 "max_namespaces": 32, 00:12:55.702 "min_cntlid": 1, 00:12:55.702 "model_number": "SPDK bdev Controller", 00:12:55.702 "namespaces": [ 00:12:55.702 { 00:12:55.702 "bdev_name": "Malloc1", 00:12:55.702 "name": "Malloc1", 00:12:55.702 "nguid": "83B7C26DF52E4C87AC65595B0D2E5B33", 00:12:55.702 "nsid": 1, 00:12:55.702 "uuid": "83b7c26d-f52e-4c87-ac65-595b0d2e5b33" 00:12:55.702 }, 00:12:55.702 { 00:12:55.702 "bdev_name": "Malloc3", 00:12:55.702 "name": "Malloc3", 00:12:55.702 "nguid": "317767F53998452CA2532A5AA0460909", 00:12:55.702 "nsid": 2, 00:12:55.702 "uuid": "317767f5-3998-452c-a253-2a5aa0460909" 00:12:55.702 } 00:12:55.702 ], 00:12:55.702 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:55.702 "serial_number": "SPDK1", 00:12:55.702 "subtype": "NVMe" 00:12:55.702 }, 00:12:55.702 { 00:12:55.702 "allow_any_host": true, 00:12:55.702 "hosts": [], 00:12:55.702 "listen_addresses": [ 00:12:55.702 { 00:12:55.702 "adrfam": "IPv4", 00:12:55.702 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:55.702 "transport": "VFIOUSER", 00:12:55.702 "trsvcid": "0", 00:12:55.702 "trtype": "VFIOUSER" 00:12:55.702 } 00:12:55.702 ], 00:12:55.702 "max_cntlid": 65519, 00:12:55.702 "max_namespaces": 32, 00:12:55.702 "min_cntlid": 1, 00:12:55.702 "model_number": "SPDK bdev Controller", 00:12:55.702 "namespaces": [ 00:12:55.702 { 00:12:55.702 "bdev_name": "Malloc2", 00:12:55.702 "name": "Malloc2", 00:12:55.702 "nguid": "8A891366B6604FE28D842A64F92A5BBD", 00:12:55.702 "nsid": 1, 00:12:55.702 "uuid": "8a891366-b660-4fe2-8d84-2a64f92a5bbd" 00:12:55.702 } 00:12:55.702 ], 00:12:55.702 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:55.702 "serial_number": "SPDK2", 00:12:55.702 "subtype": "NVMe" 00:12:55.702 } 00:12:55.702 ] 00:12:55.702 19:13:32 -- target/nvmf_vfio_user.sh@44 -- # wait 69875 
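The "Changed Namespace" callback reported above is driven from the management plane: while the aer example is attached to cnode1, the script creates a new malloc bdev and attaches it as a second namespace, which raises the namespace-attribute AEN. A minimal sketch of that trigger, reusing the same two RPC calls recorded in this log:

  # create the malloc bdev used as the new namespace (64 MiB total size, 512-byte blocks),
  # then expose it as NSID 2 of cnode1; the attached aer example should then log an aer_cb
  # for the changed-namespace log page, as seen above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2

The nvmf_get_subsystems output printed afterwards confirms the result: cnode1 now lists Malloc3 as nsid 2 alongside Malloc1.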
00:12:55.702 19:13:32 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:55.702 19:13:32 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:55.702 19:13:32 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:55.702 19:13:32 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:55.702 [2024-02-14 19:13:32.984024] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:55.702 [2024-02-14 19:13:32.984072] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69913 ] 00:12:55.962 [2024-02-14 19:13:33.119194] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:55.962 [2024-02-14 19:13:33.132179] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:55.962 [2024-02-14 19:13:33.132219] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbfa32f2000 00:12:55.962 [2024-02-14 19:13:33.133185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.962 [2024-02-14 19:13:33.134185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.962 [2024-02-14 19:13:33.135187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.963 [2024-02-14 19:13:33.136201] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:55.963 [2024-02-14 19:13:33.137204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:55.963 [2024-02-14 19:13:33.138219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.963 [2024-02-14 19:13:33.139224] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:55.963 [2024-02-14 19:13:33.140233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.963 [2024-02-14 19:13:33.141246] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:55.963 [2024-02-14 19:13:33.141279] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbfa2915000 00:12:55.963 [2024-02-14 19:13:33.142527] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:55.963 [2024-02-14 19:13:33.158061] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:55.963 [2024-02-14 19:13:33.158105] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:55.963 [2024-02-14 19:13:33.163223] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:55.963 [2024-02-14 19:13:33.163298] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:55.963 [2024-02-14 19:13:33.163407] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:55.963 [2024-02-14 19:13:33.163438] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:55.963 [2024-02-14 19:13:33.163445] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:55.963 [2024-02-14 19:13:33.164231] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:55.963 [2024-02-14 19:13:33.164264] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:55.963 [2024-02-14 19:13:33.164277] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:55.963 [2024-02-14 19:13:33.165249] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:55.963 [2024-02-14 19:13:33.165275] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:55.963 [2024-02-14 19:13:33.165288] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:55.963 [2024-02-14 19:13:33.166249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:55.963 [2024-02-14 19:13:33.166276] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:55.963 [2024-02-14 19:13:33.167252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:55.963 [2024-02-14 19:13:33.167279] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:55.963 [2024-02-14 19:13:33.167291] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:55.963 [2024-02-14 19:13:33.167301] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:55.963 [2024-02-14 19:13:33.167407] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:55.963 [2024-02-14 19:13:33.167413] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:12:55.963 [2024-02-14 19:13:33.167420] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:55.963 [2024-02-14 19:13:33.168263] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:55.963 [2024-02-14 19:13:33.169271] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:55.963 [2024-02-14 19:13:33.170279] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:55.963 [2024-02-14 19:13:33.171330] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:55.963 [2024-02-14 19:13:33.172286] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:55.963 [2024-02-14 19:13:33.172311] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:55.963 [2024-02-14 19:13:33.172319] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.172341] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:55.963 [2024-02-14 19:13:33.172353] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.172378] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:55.963 [2024-02-14 19:13:33.172385] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.963 [2024-02-14 19:13:33.172402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.963 [2024-02-14 19:13:33.176508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:55.963 [2024-02-14 19:13:33.176538] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:55.963 [2024-02-14 19:13:33.176550] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:55.963 [2024-02-14 19:13:33.176556] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:55.963 [2024-02-14 19:13:33.176562] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:55.963 [2024-02-14 19:13:33.176568] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:55.963 [2024-02-14 19:13:33.176573] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:55.963 [2024-02-14 19:13:33.176580] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.176595] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.176609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:55.963 [2024-02-14 19:13:33.184527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:55.963 [2024-02-14 19:13:33.184562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.963 [2024-02-14 19:13:33.184575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.963 [2024-02-14 19:13:33.184585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.963 [2024-02-14 19:13:33.184595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.963 [2024-02-14 19:13:33.184602] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.184616] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.184628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:55.963 [2024-02-14 19:13:33.192504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:55.963 [2024-02-14 19:13:33.192525] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:55.963 [2024-02-14 19:13:33.192533] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.192568] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.192576] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.192588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:55.963 [2024-02-14 19:13:33.200500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:55.963 [2024-02-14 19:13:33.200569] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.200583] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns 
(timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.200595] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:55.963 [2024-02-14 19:13:33.200601] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:55.963 [2024-02-14 19:13:33.200609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:55.963 [2024-02-14 19:13:33.208502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:55.963 [2024-02-14 19:13:33.208539] nvme_ctrlr.c:4544:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:55.963 [2024-02-14 19:13:33.208556] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.208568] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:55.963 [2024-02-14 19:13:33.208578] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:55.963 [2024-02-14 19:13:33.208584] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.963 [2024-02-14 19:13:33.208592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.964 [2024-02-14 19:13:33.216504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:55.964 [2024-02-14 19:13:33.216541] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:55.964 [2024-02-14 19:13:33.216554] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:55.964 [2024-02-14 19:13:33.216565] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:55.964 [2024-02-14 19:13:33.216571] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.964 [2024-02-14 19:13:33.216578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.964 [2024-02-14 19:13:33.224501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:55.964 [2024-02-14 19:13:33.224527] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:55.964 [2024-02-14 19:13:33.224538] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:55.964 [2024-02-14 19:13:33.224551] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:55.964 [2024-02-14 19:13:33.224559] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] 
setting state to set doorbell buffer config (timeout 30000 ms) 00:12:55.964 [2024-02-14 19:13:33.224565] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:55.964 [2024-02-14 19:13:33.224571] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:55.964 [2024-02-14 19:13:33.224577] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:55.964 [2024-02-14 19:13:33.224582] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:55.964 [2024-02-14 19:13:33.224610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:55.964 [2024-02-14 19:13:33.232503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:55.964 [2024-02-14 19:13:33.232535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:55.964 [2024-02-14 19:13:33.240503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:55.964 [2024-02-14 19:13:33.240533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:55.964 [2024-02-14 19:13:33.248503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:55.964 [2024-02-14 19:13:33.248532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:55.964 [2024-02-14 19:13:33.256503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:55.964 [2024-02-14 19:13:33.256535] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:55.964 [2024-02-14 19:13:33.256542] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:55.964 [2024-02-14 19:13:33.256547] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:55.964 [2024-02-14 19:13:33.256551] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:55.964 [2024-02-14 19:13:33.256559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:55.964 [2024-02-14 19:13:33.256568] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:55.964 [2024-02-14 19:13:33.256573] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:55.964 [2024-02-14 19:13:33.256580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:55.964 [2024-02-14 19:13:33.256589] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:55.964 [2024-02-14 19:13:33.256594] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.964 [2024-02-14 19:13:33.256601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.964 [2024-02-14 19:13:33.256610] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:55.964 [2024-02-14 19:13:33.256615] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:55.964 [2024-02-14 19:13:33.256622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:55.964 [2024-02-14 19:13:33.264502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:55.964 [2024-02-14 19:13:33.264540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:55.964 [2024-02-14 19:13:33.264554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:55.964 [2024-02-14 19:13:33.264563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:55.964 ===================================================== 00:12:55.964 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:55.964 ===================================================== 00:12:55.964 Controller Capabilities/Features 00:12:55.964 ================================ 00:12:55.964 Vendor ID: 4e58 00:12:55.964 Subsystem Vendor ID: 4e58 00:12:55.964 Serial Number: SPDK2 00:12:55.964 Model Number: SPDK bdev Controller 00:12:55.964 Firmware Version: 24.05 00:12:55.964 Recommended Arb Burst: 6 00:12:55.964 IEEE OUI Identifier: 8d 6b 50 00:12:55.964 Multi-path I/O 00:12:55.964 May have multiple subsystem ports: Yes 00:12:55.964 May have multiple controllers: Yes 00:12:55.964 Associated with SR-IOV VF: No 00:12:55.964 Max Data Transfer Size: 131072 00:12:55.964 Max Number of Namespaces: 32 00:12:55.964 Max Number of I/O Queues: 127 00:12:55.964 NVMe Specification Version (VS): 1.3 00:12:55.964 NVMe Specification Version (Identify): 1.3 00:12:55.964 Maximum Queue Entries: 256 00:12:55.964 Contiguous Queues Required: Yes 00:12:55.964 Arbitration Mechanisms Supported 00:12:55.964 Weighted Round Robin: Not Supported 00:12:55.964 Vendor Specific: Not Supported 00:12:55.964 Reset Timeout: 15000 ms 00:12:55.964 Doorbell Stride: 4 bytes 00:12:55.964 NVM Subsystem Reset: Not Supported 00:12:55.964 Command Sets Supported 00:12:55.964 NVM Command Set: Supported 00:12:55.964 Boot Partition: Not Supported 00:12:55.964 Memory Page Size Minimum: 4096 bytes 00:12:55.964 Memory Page Size Maximum: 4096 bytes 00:12:55.964 Persistent Memory Region: Not Supported 00:12:55.964 Optional Asynchronous Events Supported 00:12:55.964 Namespace Attribute Notices: Supported 00:12:55.964 Firmware Activation Notices: Not Supported 00:12:55.964 ANA Change Notices: Not Supported 00:12:55.964 PLE Aggregate Log Change Notices: Not Supported 00:12:55.964 LBA Status Info Alert Notices: Not Supported 00:12:55.964 EGE Aggregate Log Change Notices: Not Supported 00:12:55.964 Normal NVM Subsystem Shutdown event: Not Supported 00:12:55.964 Zone Descriptor Change Notices: Not Supported 
00:12:55.964 Discovery Log Change Notices: Not Supported 00:12:55.964 Controller Attributes 00:12:55.964 128-bit Host Identifier: Supported 00:12:55.964 Non-Operational Permissive Mode: Not Supported 00:12:55.964 NVM Sets: Not Supported 00:12:55.964 Read Recovery Levels: Not Supported 00:12:55.964 Endurance Groups: Not Supported 00:12:55.964 Predictable Latency Mode: Not Supported 00:12:55.964 Traffic Based Keep ALive: Not Supported 00:12:55.964 Namespace Granularity: Not Supported 00:12:55.964 SQ Associations: Not Supported 00:12:55.964 UUID List: Not Supported 00:12:55.964 Multi-Domain Subsystem: Not Supported 00:12:55.964 Fixed Capacity Management: Not Supported 00:12:55.964 Variable Capacity Management: Not Supported 00:12:55.964 Delete Endurance Group: Not Supported 00:12:55.964 Delete NVM Set: Not Supported 00:12:55.964 Extended LBA Formats Supported: Not Supported 00:12:55.964 Flexible Data Placement Supported: Not Supported 00:12:55.964 00:12:55.964 Controller Memory Buffer Support 00:12:55.964 ================================ 00:12:55.964 Supported: No 00:12:55.964 00:12:55.964 Persistent Memory Region Support 00:12:55.964 ================================ 00:12:55.964 Supported: No 00:12:55.964 00:12:55.964 Admin Command Set Attributes 00:12:55.964 ============================ 00:12:55.964 Security Send/Receive: Not Supported 00:12:55.964 Format NVM: Not Supported 00:12:55.964 Firmware Activate/Download: Not Supported 00:12:55.964 Namespace Management: Not Supported 00:12:55.964 Device Self-Test: Not Supported 00:12:55.964 Directives: Not Supported 00:12:55.964 NVMe-MI: Not Supported 00:12:55.964 Virtualization Management: Not Supported 00:12:55.964 Doorbell Buffer Config: Not Supported 00:12:55.964 Get LBA Status Capability: Not Supported 00:12:55.964 Command & Feature Lockdown Capability: Not Supported 00:12:55.964 Abort Command Limit: 4 00:12:55.964 Async Event Request Limit: 4 00:12:55.964 Number of Firmware Slots: N/A 00:12:55.964 Firmware Slot 1 Read-Only: N/A 00:12:55.964 Firmware Activation Without Reset: N/A 00:12:55.964 Multiple Update Detection Support: N/A 00:12:55.964 Firmware Update Granularity: No Information Provided 00:12:55.965 Per-Namespace SMART Log: No 00:12:55.965 Asymmetric Namespace Access Log Page: Not Supported 00:12:55.965 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:55.965 Command Effects Log Page: Supported 00:12:55.965 Get Log Page Extended Data: Supported 00:12:55.965 Telemetry Log Pages: Not Supported 00:12:55.965 Persistent Event Log Pages: Not Supported 00:12:55.965 Supported Log Pages Log Page: May Support 00:12:55.965 Commands Supported & Effects Log Page: Not Supported 00:12:55.965 Feature Identifiers & Effects Log Page:May Support 00:12:55.965 NVMe-MI Commands & Effects Log Page: May Support 00:12:55.965 Data Area 4 for Telemetry Log: Not Supported 00:12:55.965 Error Log Page Entries Supported: 128 00:12:55.965 Keep Alive: Supported 00:12:55.965 Keep Alive Granularity: 10000 ms 00:12:55.965 00:12:55.965 NVM Command Set Attributes 00:12:55.965 ========================== 00:12:55.965 Submission Queue Entry Size 00:12:55.965 Max: 64 00:12:55.965 Min: 64 00:12:55.965 Completion Queue Entry Size 00:12:55.965 Max: 16 00:12:55.965 Min: 16 00:12:55.965 Number of Namespaces: 32 00:12:55.965 Compare Command: Supported 00:12:55.965 Write Uncorrectable Command: Not Supported 00:12:55.965 Dataset Management Command: Supported 00:12:55.965 Write Zeroes Command: Supported 00:12:55.965 Set Features Save Field: Not Supported 00:12:55.965 Reservations: Not 
Supported 00:12:55.965 Timestamp: Not Supported 00:12:55.965 Copy: Supported 00:12:55.965 Volatile Write Cache: Present 00:12:55.965 Atomic Write Unit (Normal): 1 00:12:55.965 Atomic Write Unit (PFail): 1 00:12:55.965 Atomic Compare & Write Unit: 1 00:12:55.965 Fused Compare & Write: Supported 00:12:55.965 Scatter-Gather List 00:12:55.965 SGL Command Set: Supported (Dword aligned) 00:12:55.965 SGL Keyed: Not Supported 00:12:55.965 SGL Bit Bucket Descriptor: Not Supported 00:12:55.965 SGL Metadata Pointer: Not Supported 00:12:55.965 Oversized SGL: Not Supported 00:12:55.965 SGL Metadata Address: Not Supported 00:12:55.965 SGL Offset: Not Supported 00:12:55.965 Transport SGL Data Block: Not Supported 00:12:55.965 Replay Protected Memory Block: Not Supported 00:12:55.965 00:12:55.965 Firmware Slot Information 00:12:55.965 ========================= 00:12:55.965 Active slot: 1 00:12:55.965 Slot 1 Firmware Revision: 24.05 00:12:55.965 00:12:55.965 00:12:55.965 Commands Supported and Effects 00:12:55.965 ============================== 00:12:55.965 Admin Commands 00:12:55.965 -------------- 00:12:55.965 Get Log Page (02h): Supported 00:12:55.965 Identify (06h): Supported 00:12:55.965 Abort (08h): Supported 00:12:55.965 Set Features (09h): Supported 00:12:55.965 Get Features (0Ah): Supported 00:12:55.965 Asynchronous Event Request (0Ch): Supported 00:12:55.965 Keep Alive (18h): Supported 00:12:55.965 I/O Commands 00:12:55.965 ------------ 00:12:55.965 Flush (00h): Supported LBA-Change 00:12:55.965 Write (01h): Supported LBA-Change 00:12:55.965 Read (02h): Supported 00:12:55.965 Compare (05h): Supported 00:12:55.965 Write Zeroes (08h): Supported LBA-Change 00:12:55.965 Dataset Management (09h): Supported LBA-Change 00:12:55.965 Copy (19h): Supported LBA-Change 00:12:55.965 Unknown (79h): Supported LBA-Change 00:12:55.965 Unknown (7Ah): Supported 00:12:55.965 00:12:55.965 Error Log 00:12:55.965 ========= 00:12:55.965 00:12:55.965 Arbitration 00:12:55.965 =========== 00:12:55.965 Arbitration Burst: 1 00:12:55.965 00:12:55.965 Power Management 00:12:55.965 ================ 00:12:55.965 Number of Power States: 1 00:12:55.965 Current Power State: Power State #0 00:12:55.965 Power State #0: 00:12:55.965 Max Power: 0.00 W 00:12:55.965 Non-Operational State: Operational 00:12:55.965 Entry Latency: Not Reported 00:12:55.965 Exit Latency: Not Reported 00:12:55.965 Relative Read Throughput: 0 00:12:55.965 Relative Read Latency: 0 00:12:55.965 Relative Write Throughput: 0 00:12:55.965 Relative Write Latency: 0 00:12:55.965 Idle Power: Not Reported 00:12:55.965 Active Power: Not Reported 00:12:55.965 Non-Operational Permissive Mode: Not Supported 00:12:55.965 00:12:55.965 Health Information 00:12:55.965 ================== 00:12:55.965 Critical Warnings: 00:12:55.965 Available Spare Space: OK 00:12:55.965 Temperature: OK 00:12:55.965 Device Reliability: OK 00:12:55.965 Read Only: No 00:12:55.965 Volatile Memory Backup: OK 00:12:55.965 Current Temperature: 0 Kelvin (-2[2024-02-14 19:13:33.264706] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:55.965 [2024-02-14 19:13:33.272507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:55.965 [2024-02-14 19:13:33.272565] nvme_ctrlr.c:4208:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:55.965 [2024-02-14 19:13:33.272581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.965 [2024-02-14 19:13:33.272589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.965 [2024-02-14 19:13:33.272597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.965 [2024-02-14 19:13:33.272605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.965 [2024-02-14 19:13:33.276505] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:55.965 [2024-02-14 19:13:33.276535] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:55.965 [2024-02-14 19:13:33.276754] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:55.965 [2024-02-14 19:13:33.276766] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:55.965 [2024-02-14 19:13:33.277706] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:55.965 [2024-02-14 19:13:33.277738] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:55.965 [2024-02-14 19:13:33.278004] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:55.965 [2024-02-14 19:13:33.279378] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:55.965 73 Celsius) 00:12:55.965 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:55.965 Available Spare: 0% 00:12:55.965 Available Spare Threshold: 0% 00:12:55.965 Life Percentage Used: 0% 00:12:55.965 Data Units Read: 0 00:12:55.965 Data Units Written: 0 00:12:55.965 Host Read Commands: 0 00:12:55.965 Host Write Commands: 0 00:12:55.965 Controller Busy Time: 0 minutes 00:12:55.965 Power Cycles: 0 00:12:55.965 Power On Hours: 0 hours 00:12:55.965 Unsafe Shutdowns: 0 00:12:55.965 Unrecoverable Media Errors: 0 00:12:55.965 Lifetime Error Log Entries: 0 00:12:55.965 Warning Temperature Time: 0 minutes 00:12:55.965 Critical Temperature Time: 0 minutes 00:12:55.965 00:12:55.965 Number of Queues 00:12:55.965 ================ 00:12:55.965 Number of I/O Submission Queues: 127 00:12:55.965 Number of I/O Completion Queues: 127 00:12:55.965 00:12:55.965 Active Namespaces 00:12:55.965 ================= 00:12:55.965 Namespace ID:1 00:12:55.965 Error Recovery Timeout: Unlimited 00:12:55.965 Command Set Identifier: NVM (00h) 00:12:55.965 Deallocate: Supported 00:12:55.965 Deallocated/Unwritten Error: Not Supported 00:12:55.965 Deallocated Read Value: Unknown 00:12:55.965 Deallocate in Write Zeroes: Not Supported 00:12:55.965 Deallocated Guard Field: 0xFFFF 00:12:55.965 Flush: Supported 00:12:55.965 Reservation: Supported 00:12:55.965 Namespace Sharing Capabilities: Multiple Controllers 00:12:55.965 Size (in LBAs): 131072 (0GiB) 00:12:55.965 Capacity (in LBAs): 131072 (0GiB) 00:12:55.965 Utilization (in LBAs): 131072 (0GiB) 00:12:55.965 NGUID: 
8A891366B6604FE28D842A64F92A5BBD 00:12:55.965 UUID: 8a891366-b660-4fe2-8d84-2a64f92a5bbd 00:12:55.965 Thin Provisioning: Not Supported 00:12:55.965 Per-NS Atomic Units: Yes 00:12:55.965 Atomic Boundary Size (Normal): 0 00:12:55.965 Atomic Boundary Size (PFail): 0 00:12:55.965 Atomic Boundary Offset: 0 00:12:55.965 Maximum Single Source Range Length: 65535 00:12:55.965 Maximum Copy Length: 65535 00:12:55.965 Maximum Source Range Count: 1 00:12:55.965 NGUID/EUI64 Never Reused: No 00:12:55.965 Namespace Write Protected: No 00:12:55.965 Number of LBA Formats: 1 00:12:55.965 Current LBA Format: LBA Format #00 00:12:55.965 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:55.965 00:12:55.965 19:13:33 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:02.526 Initializing NVMe Controllers 00:13:02.526 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:02.526 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:02.526 Initialization complete. Launching workers. 00:13:02.526 ======================================================== 00:13:02.526 Latency(us) 00:13:02.526 Device Information : IOPS MiB/s Average min max 00:13:02.526 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31397.81 122.65 4075.83 1254.15 9014.93 00:13:02.526 ======================================================== 00:13:02.526 Total : 31397.81 122.65 4075.83 1254.15 9014.93 00:13:02.526 00:13:02.526 19:13:38 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:06.737 Initializing NVMe Controllers 00:13:06.737 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:06.737 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:06.737 Initialization complete. Launching workers. 00:13:06.737 ======================================================== 00:13:06.737 Latency(us) 00:13:06.737 Device Information : IOPS MiB/s Average min max 00:13:06.737 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33464.60 130.72 3825.80 1229.36 7299.69 00:13:06.737 ======================================================== 00:13:06.737 Total : 33464.60 130.72 3825.80 1229.36 7299.69 00:13:06.737 00:13:06.737 19:13:44 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:13.307 Initializing NVMe Controllers 00:13:13.307 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:13.307 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:13.307 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:13.307 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:13.307 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:13.307 Initialization complete. 
Launching workers. 00:13:13.307 Starting thread on core 2 00:13:13.307 Starting thread on core 3 00:13:13.307 Starting thread on core 1 00:13:13.307 19:13:49 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:15.839 Initializing NVMe Controllers 00:13:15.839 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.839 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.839 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:15.839 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:15.839 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:15.839 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:15.839 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:15.839 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:15.839 Initialization complete. Launching workers. 00:13:15.839 Starting thread on core 1 with urgent priority queue 00:13:15.839 Starting thread on core 2 with urgent priority queue 00:13:15.839 Starting thread on core 3 with urgent priority queue 00:13:15.839 Starting thread on core 0 with urgent priority queue 00:13:15.839 SPDK bdev Controller (SPDK2 ) core 0: 4599.33 IO/s 21.74 secs/100000 ios 00:13:15.839 SPDK bdev Controller (SPDK2 ) core 1: 4770.00 IO/s 20.96 secs/100000 ios 00:13:15.839 SPDK bdev Controller (SPDK2 ) core 2: 3844.33 IO/s 26.01 secs/100000 ios 00:13:15.839 SPDK bdev Controller (SPDK2 ) core 3: 4037.00 IO/s 24.77 secs/100000 ios 00:13:15.839 ======================================================== 00:13:15.839 00:13:15.839 19:13:52 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:15.839 Initializing NVMe Controllers 00:13:15.839 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.839 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.839 Namespace ID: 1 size: 0GB 00:13:15.839 Initialization complete. 00:13:15.839 INFO: using host memory buffer for IO 00:13:15.839 Hello world! 00:13:15.839 19:13:53 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:17.217 Initializing NVMe Controllers 00:13:17.217 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.217 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.217 Initialization complete. Launching workers. 
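The overhead tool reports per-I/O software overhead for submission and completion separately: the summary lines give avg/min/max in nanoseconds, and the histograms below bucket the same samples by microsecond range with cumulative percentages and per-bucket counts, so the few samples in the ~4000 us buckets are what produce the ~4 ms worst-case submit latency. A sketch of the invocation as run by this job (-H is what requests the histograms):

# Measure per-I/O submit/complete overhead against the vfio-user controller
# (-o = I/O size in bytes, -t = run time in seconds, -H = print latency histograms)
/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'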
00:13:17.217 submit (in ns) avg, min, max = 8387.3, 3671.8, 4051732.7 00:13:17.217 complete (in ns) avg, min, max = 35145.0, 1919.1, 7999738.6 00:13:17.217 00:13:17.217 Submit histogram 00:13:17.217 ================ 00:13:17.217 Range in us Cumulative Count 00:13:17.217 3.665 - 3.680: 0.0097% ( 1) 00:13:17.217 3.724 - 3.753: 0.0194% ( 1) 00:13:17.217 3.753 - 3.782: 0.0291% ( 1) 00:13:17.217 3.782 - 3.811: 0.0388% ( 1) 00:13:17.217 3.811 - 3.840: 0.0582% ( 2) 00:13:17.217 3.840 - 3.869: 0.1066% ( 5) 00:13:17.217 3.869 - 3.898: 0.2230% ( 12) 00:13:17.217 3.898 - 3.927: 0.4266% ( 21) 00:13:17.217 3.927 - 3.956: 0.8822% ( 47) 00:13:17.217 3.956 - 3.985: 2.0940% ( 125) 00:13:17.217 3.985 - 4.015: 4.9927% ( 299) 00:13:17.217 4.015 - 4.044: 8.8706% ( 400) 00:13:17.217 4.044 - 4.073: 13.6016% ( 488) 00:13:17.217 4.073 - 4.102: 19.2050% ( 578) 00:13:17.217 4.102 - 4.131: 26.4372% ( 746) 00:13:17.217 4.131 - 4.160: 34.4450% ( 826) 00:13:17.217 4.160 - 4.189: 42.6563% ( 847) 00:13:17.217 4.189 - 4.218: 50.1115% ( 769) 00:13:17.217 4.218 - 4.247: 57.0916% ( 720) 00:13:17.217 4.247 - 4.276: 62.9956% ( 609) 00:13:17.217 4.276 - 4.305: 67.7169% ( 487) 00:13:17.217 4.305 - 4.335: 71.2555% ( 365) 00:13:17.217 4.335 - 4.364: 74.1541% ( 299) 00:13:17.217 4.364 - 4.393: 76.5778% ( 250) 00:13:17.217 4.393 - 4.422: 78.6234% ( 211) 00:13:17.217 4.422 - 4.451: 80.3005% ( 173) 00:13:17.217 4.451 - 4.480: 81.9680% ( 172) 00:13:17.217 4.480 - 4.509: 83.2962% ( 137) 00:13:17.217 4.509 - 4.538: 84.7116% ( 146) 00:13:17.217 4.538 - 4.567: 85.7198% ( 104) 00:13:17.217 4.567 - 4.596: 86.7765% ( 109) 00:13:17.217 4.596 - 4.625: 87.6587% ( 91) 00:13:17.217 4.625 - 4.655: 88.4440% ( 81) 00:13:17.217 4.655 - 4.684: 89.0839% ( 66) 00:13:17.217 4.684 - 4.713: 89.6171% ( 55) 00:13:17.217 4.713 - 4.742: 90.0824% ( 48) 00:13:17.217 4.742 - 4.771: 90.4993% ( 43) 00:13:17.217 4.771 - 4.800: 90.8386% ( 35) 00:13:17.217 4.800 - 4.829: 91.2264% ( 40) 00:13:17.217 4.829 - 4.858: 91.5560% ( 34) 00:13:17.217 4.858 - 4.887: 91.8080% ( 26) 00:13:17.217 4.887 - 4.916: 92.0698% ( 27) 00:13:17.217 4.916 - 4.945: 92.2831% ( 22) 00:13:17.217 4.945 - 4.975: 92.4770% ( 20) 00:13:17.217 4.975 - 5.004: 92.7484% ( 28) 00:13:17.217 5.004 - 5.033: 92.8938% ( 15) 00:13:17.217 5.033 - 5.062: 93.1265% ( 24) 00:13:17.217 5.062 - 5.091: 93.3495% ( 23) 00:13:17.217 5.091 - 5.120: 93.5337% ( 19) 00:13:17.217 5.120 - 5.149: 93.6403% ( 11) 00:13:17.217 5.149 - 5.178: 93.8148% ( 18) 00:13:17.217 5.178 - 5.207: 93.9215% ( 11) 00:13:17.217 5.207 - 5.236: 94.0378% ( 12) 00:13:17.217 5.236 - 5.265: 94.2220% ( 19) 00:13:17.217 5.265 - 5.295: 94.3383% ( 12) 00:13:17.217 5.295 - 5.324: 94.4644% ( 13) 00:13:17.217 5.324 - 5.353: 94.6001% ( 14) 00:13:17.217 5.353 - 5.382: 94.7552% ( 16) 00:13:17.217 5.382 - 5.411: 94.9200% ( 17) 00:13:17.217 5.411 - 5.440: 95.0945% ( 18) 00:13:17.217 5.440 - 5.469: 95.2012% ( 11) 00:13:17.217 5.469 - 5.498: 95.3369% ( 14) 00:13:17.217 5.498 - 5.527: 95.4338% ( 10) 00:13:17.217 5.527 - 5.556: 95.5405% ( 11) 00:13:17.217 5.556 - 5.585: 95.6180% ( 8) 00:13:17.217 5.585 - 5.615: 95.7150% ( 10) 00:13:17.217 5.615 - 5.644: 95.8119% ( 10) 00:13:17.217 5.644 - 5.673: 95.9089% ( 10) 00:13:17.217 5.673 - 5.702: 95.9573% ( 5) 00:13:17.217 5.702 - 5.731: 96.0349% ( 8) 00:13:17.217 5.731 - 5.760: 96.1028% ( 7) 00:13:17.217 5.760 - 5.789: 96.1512% ( 5) 00:13:17.217 5.789 - 5.818: 96.2288% ( 8) 00:13:17.217 5.818 - 5.847: 96.2482% ( 2) 00:13:17.217 5.847 - 5.876: 96.3451% ( 10) 00:13:17.217 5.876 - 5.905: 96.4033% ( 6) 00:13:17.217 5.905 - 5.935: 
96.4809% ( 8) 00:13:17.217 5.935 - 5.964: 96.5390% ( 6) 00:13:17.217 5.964 - 5.993: 96.6069% ( 7) 00:13:17.217 5.993 - 6.022: 96.6263% ( 2) 00:13:17.217 6.022 - 6.051: 96.7038% ( 8) 00:13:17.217 6.051 - 6.080: 96.7232% ( 2) 00:13:17.217 6.080 - 6.109: 96.7814% ( 6) 00:13:17.217 6.109 - 6.138: 96.8008% ( 2) 00:13:17.217 6.138 - 6.167: 96.8105% ( 1) 00:13:17.217 6.167 - 6.196: 96.8396% ( 3) 00:13:17.217 6.196 - 6.225: 96.8977% ( 6) 00:13:17.217 6.255 - 6.284: 96.9074% ( 1) 00:13:17.217 6.284 - 6.313: 96.9365% ( 3) 00:13:17.217 6.313 - 6.342: 96.9850% ( 5) 00:13:17.217 6.342 - 6.371: 97.0141% ( 3) 00:13:17.217 6.371 - 6.400: 97.0528% ( 4) 00:13:17.217 6.400 - 6.429: 97.0819% ( 3) 00:13:17.217 6.429 - 6.458: 97.1207% ( 4) 00:13:17.217 6.458 - 6.487: 97.1692% ( 5) 00:13:17.217 6.487 - 6.516: 97.1983% ( 3) 00:13:17.217 6.516 - 6.545: 97.2079% ( 1) 00:13:17.217 6.545 - 6.575: 97.2273% ( 2) 00:13:17.217 6.575 - 6.604: 97.2467% ( 2) 00:13:17.217 6.604 - 6.633: 97.2564% ( 1) 00:13:17.217 6.633 - 6.662: 97.2952% ( 4) 00:13:17.217 6.662 - 6.691: 97.3049% ( 1) 00:13:17.217 6.691 - 6.720: 97.3146% ( 1) 00:13:17.217 6.720 - 6.749: 97.3437% ( 3) 00:13:17.217 6.749 - 6.778: 97.3728% ( 3) 00:13:17.217 6.778 - 6.807: 97.4212% ( 5) 00:13:17.217 6.807 - 6.836: 97.4600% ( 4) 00:13:17.217 6.836 - 6.865: 97.4988% ( 4) 00:13:17.217 6.865 - 6.895: 97.5473% ( 5) 00:13:17.217 6.895 - 6.924: 97.5570% ( 1) 00:13:17.217 6.924 - 6.953: 97.5763% ( 2) 00:13:17.217 6.982 - 7.011: 97.5860% ( 1) 00:13:17.217 7.011 - 7.040: 97.6054% ( 2) 00:13:17.217 7.040 - 7.069: 97.6248% ( 2) 00:13:17.217 7.069 - 7.098: 97.6345% ( 1) 00:13:17.217 7.098 - 7.127: 97.6442% ( 1) 00:13:17.217 7.127 - 7.156: 97.6636% ( 2) 00:13:17.217 7.156 - 7.185: 97.6830% ( 2) 00:13:17.217 7.244 - 7.273: 97.6927% ( 1) 00:13:17.217 7.273 - 7.302: 97.7315% ( 4) 00:13:17.217 7.331 - 7.360: 97.7508% ( 2) 00:13:17.217 7.360 - 7.389: 97.7605% ( 1) 00:13:17.217 7.389 - 7.418: 97.7702% ( 1) 00:13:17.217 7.418 - 7.447: 97.7993% ( 3) 00:13:17.217 7.447 - 7.505: 97.8284% ( 3) 00:13:17.217 7.505 - 7.564: 97.8672% ( 4) 00:13:17.217 7.564 - 7.622: 97.9060% ( 4) 00:13:17.217 7.622 - 7.680: 97.9254% ( 2) 00:13:17.217 7.680 - 7.738: 97.9447% ( 2) 00:13:17.217 7.738 - 7.796: 97.9641% ( 2) 00:13:17.217 7.796 - 7.855: 97.9738% ( 1) 00:13:17.217 7.855 - 7.913: 97.9932% ( 2) 00:13:17.217 7.913 - 7.971: 98.0611% ( 7) 00:13:17.217 7.971 - 8.029: 98.0805% ( 2) 00:13:17.217 8.029 - 8.087: 98.1192% ( 4) 00:13:17.217 8.087 - 8.145: 98.1871% ( 7) 00:13:17.217 8.145 - 8.204: 98.2065% ( 2) 00:13:17.217 8.204 - 8.262: 98.2356% ( 3) 00:13:17.217 8.262 - 8.320: 98.2550% ( 2) 00:13:17.217 8.320 - 8.378: 98.2937% ( 4) 00:13:17.217 8.436 - 8.495: 98.3131% ( 2) 00:13:17.217 8.495 - 8.553: 98.3228% ( 1) 00:13:17.217 8.553 - 8.611: 98.3422% ( 2) 00:13:17.217 8.669 - 8.727: 98.3810% ( 4) 00:13:17.217 8.785 - 8.844: 98.4004% ( 2) 00:13:17.217 8.902 - 8.960: 98.4198% ( 2) 00:13:17.217 8.960 - 9.018: 98.4586% ( 4) 00:13:17.217 9.018 - 9.076: 98.4683% ( 1) 00:13:17.217 9.135 - 9.193: 98.4973% ( 3) 00:13:17.217 9.251 - 9.309: 98.5070% ( 1) 00:13:17.217 9.309 - 9.367: 98.5458% ( 4) 00:13:17.217 9.425 - 9.484: 98.5555% ( 1) 00:13:17.217 9.484 - 9.542: 98.5652% ( 1) 00:13:17.217 9.542 - 9.600: 98.5846% ( 2) 00:13:17.217 9.600 - 9.658: 98.6040% ( 2) 00:13:17.217 9.716 - 9.775: 98.6234% ( 2) 00:13:17.217 9.775 - 9.833: 98.6331% ( 1) 00:13:17.217 9.833 - 9.891: 98.6428% ( 1) 00:13:17.217 9.949 - 10.007: 98.6621% ( 2) 00:13:17.217 10.007 - 10.065: 98.6718% ( 1) 00:13:17.217 10.065 - 10.124: 98.6912% ( 2) 
00:13:17.217 10.124 - 10.182: 98.7106% ( 2) 00:13:17.217 10.182 - 10.240: 98.7203% ( 1) 00:13:17.217 10.298 - 10.356: 98.7397% ( 2) 00:13:17.217 10.356 - 10.415: 98.7591% ( 2) 00:13:17.217 10.415 - 10.473: 98.7688% ( 1) 00:13:17.217 10.473 - 10.531: 98.7882% ( 2) 00:13:17.217 10.531 - 10.589: 98.7979% ( 1) 00:13:17.217 10.589 - 10.647: 98.8173% ( 2) 00:13:17.217 10.647 - 10.705: 98.8463% ( 3) 00:13:17.217 10.705 - 10.764: 98.8657% ( 2) 00:13:17.217 10.764 - 10.822: 98.8851% ( 2) 00:13:17.217 10.880 - 10.938: 98.9045% ( 2) 00:13:17.217 10.938 - 10.996: 98.9142% ( 1) 00:13:17.217 11.055 - 11.113: 98.9433% ( 3) 00:13:17.217 11.113 - 11.171: 98.9530% ( 1) 00:13:17.217 11.171 - 11.229: 98.9627% ( 1) 00:13:17.217 11.229 - 11.287: 98.9918% ( 3) 00:13:17.217 11.287 - 11.345: 99.0208% ( 3) 00:13:17.217 11.404 - 11.462: 99.0305% ( 1) 00:13:17.217 11.520 - 11.578: 99.0499% ( 2) 00:13:17.217 11.636 - 11.695: 99.0596% ( 1) 00:13:17.217 11.695 - 11.753: 99.0790% ( 2) 00:13:17.217 11.753 - 11.811: 99.0984% ( 2) 00:13:17.217 11.985 - 12.044: 99.1081% ( 1) 00:13:17.217 12.044 - 12.102: 99.1178% ( 1) 00:13:17.217 12.102 - 12.160: 99.1275% ( 1) 00:13:17.217 12.160 - 12.218: 99.1372% ( 1) 00:13:17.217 12.335 - 12.393: 99.1469% ( 1) 00:13:17.217 12.393 - 12.451: 99.1566% ( 1) 00:13:17.217 12.509 - 12.567: 99.1760% ( 2) 00:13:17.217 12.625 - 12.684: 99.1857% ( 1) 00:13:17.217 12.858 - 12.916: 99.1953% ( 1) 00:13:17.217 13.207 - 13.265: 99.2147% ( 2) 00:13:17.217 13.265 - 13.324: 99.2244% ( 1) 00:13:17.217 13.440 - 13.498: 99.2341% ( 1) 00:13:17.217 13.673 - 13.731: 99.2438% ( 1) 00:13:17.217 13.789 - 13.847: 99.2535% ( 1) 00:13:17.217 13.905 - 13.964: 99.2632% ( 1) 00:13:17.217 14.255 - 14.313: 99.2729% ( 1) 00:13:17.218 14.487 - 14.545: 99.2826% ( 1) 00:13:17.218 14.662 - 14.720: 99.2923% ( 1) 00:13:17.218 15.011 - 15.127: 99.3020% ( 1) 00:13:17.218 15.244 - 15.360: 99.3117% ( 1) 00:13:17.218 15.593 - 15.709: 99.3311% ( 2) 00:13:17.218 15.709 - 15.825: 99.3408% ( 1) 00:13:17.218 15.825 - 15.942: 99.3505% ( 1) 00:13:17.218 16.058 - 16.175: 99.3602% ( 1) 00:13:17.218 17.338 - 17.455: 99.3698% ( 1) 00:13:17.218 17.804 - 17.920: 99.3795% ( 1) 00:13:17.218 18.618 - 18.735: 99.3892% ( 1) 00:13:17.218 18.851 - 18.967: 99.4086% ( 2) 00:13:17.218 18.967 - 19.084: 99.4280% ( 2) 00:13:17.218 19.084 - 19.200: 99.4668% ( 4) 00:13:17.218 19.200 - 19.316: 99.5056% ( 4) 00:13:17.218 19.316 - 19.433: 99.5347% ( 3) 00:13:17.218 19.433 - 19.549: 99.5928% ( 6) 00:13:17.218 19.549 - 19.665: 99.6316% ( 4) 00:13:17.218 19.782 - 19.898: 99.6510% ( 2) 00:13:17.218 20.015 - 20.131: 99.6704% ( 2) 00:13:17.218 20.247 - 20.364: 99.6898% ( 2) 00:13:17.218 20.364 - 20.480: 99.7092% ( 2) 00:13:17.218 20.480 - 20.596: 99.7382% ( 3) 00:13:17.218 20.596 - 20.713: 99.7673% ( 3) 00:13:17.218 20.713 - 20.829: 99.7964% ( 3) 00:13:17.218 20.829 - 20.945: 99.8061% ( 1) 00:13:17.218 20.945 - 21.062: 99.8158% ( 1) 00:13:17.218 21.062 - 21.178: 99.8352% ( 2) 00:13:17.218 21.178 - 21.295: 99.8449% ( 1) 00:13:17.218 21.411 - 21.527: 99.8546% ( 1) 00:13:17.218 21.760 - 21.876: 99.8643% ( 1) 00:13:17.218 25.716 - 25.833: 99.8740% ( 1) 00:13:17.218 27.695 - 27.811: 99.8837% ( 1) 00:13:17.218 33.513 - 33.745: 99.8934% ( 1) 00:13:17.218 38.167 - 38.400: 99.9031% ( 1) 00:13:17.218 3961.949 - 3991.738: 99.9224% ( 2) 00:13:17.218 3991.738 - 4021.527: 99.9612% ( 4) 00:13:17.218 4021.527 - 4051.316: 99.9903% ( 3) 00:13:17.218 4051.316 - 4081.105: 100.0000% ( 1) 00:13:17.218 00:13:17.218 Complete histogram 00:13:17.218 ================== 00:13:17.218 Range in us 
Cumulative Count 00:13:17.218 1.905 - 1.920: 0.0097% ( 1) 00:13:17.218 1.920 - 1.935: 0.0485% ( 4) 00:13:17.218 1.964 - 1.978: 0.0582% ( 1) 00:13:17.218 1.978 - 1.993: 0.0679% ( 1) 00:13:17.218 1.993 - 2.007: 0.0776% ( 1) 00:13:17.218 2.036 - 2.051: 0.7756% ( 72) 00:13:17.218 2.051 - 2.065: 2.1716% ( 144) 00:13:17.218 2.065 - 2.080: 2.2588% ( 9) 00:13:17.218 2.095 - 2.109: 2.3752% ( 12) 00:13:17.218 2.109 - 2.124: 23.3834% ( 2167) 00:13:17.218 2.124 - 2.138: 43.1604% ( 2040) 00:13:17.218 2.138 - 2.153: 43.7518% ( 61) 00:13:17.218 2.153 - 2.167: 43.9845% ( 24) 00:13:17.218 2.167 - 2.182: 45.2254% ( 128) 00:13:17.218 2.182 - 2.196: 61.5608% ( 1685) 00:13:17.218 2.196 - 2.211: 88.4634% ( 2775) 00:13:17.218 2.211 - 2.225: 90.8677% ( 248) 00:13:17.218 2.225 - 2.240: 91.3330% ( 48) 00:13:17.218 2.240 - 2.255: 92.4673% ( 117) 00:13:17.218 2.255 - 2.269: 94.4353% ( 203) 00:13:17.218 2.269 - 2.284: 95.4920% ( 109) 00:13:17.218 2.284 - 2.298: 96.0058% ( 53) 00:13:17.218 2.298 - 2.313: 96.5002% ( 51) 00:13:17.218 2.313 - 2.327: 96.7038% ( 21) 00:13:17.218 2.327 - 2.342: 96.9462% ( 25) 00:13:17.218 2.342 - 2.356: 97.1110% ( 17) 00:13:17.218 2.356 - 2.371: 97.2467% ( 14) 00:13:17.218 2.371 - 2.385: 97.3921% ( 15) 00:13:17.218 2.385 - 2.400: 97.5376% ( 15) 00:13:17.218 2.400 - 2.415: 97.6151% ( 8) 00:13:17.218 2.415 - 2.429: 97.7218% ( 11) 00:13:17.218 2.429 - 2.444: 97.7993% ( 8) 00:13:17.218 2.444 - 2.458: 97.8575% ( 6) 00:13:17.218 2.458 - 2.473: 97.8963% ( 4) 00:13:17.218 2.473 - 2.487: 97.9254% ( 3) 00:13:17.218 2.487 - 2.502: 97.9641% ( 4) 00:13:17.218 2.502 - 2.516: 98.0320% ( 7) 00:13:17.218 2.516 - 2.531: 98.0611% ( 3) 00:13:17.218 2.531 - 2.545: 98.0708% ( 1) 00:13:17.218 2.545 - 2.560: 98.0805% ( 1) 00:13:17.218 2.560 - 2.575: 98.0999% ( 2) 00:13:17.218 2.575 - 2.589: 98.1095% ( 1) 00:13:17.218 2.618 - 2.633: 98.1192% ( 1) 00:13:17.218 2.633 - 2.647: 98.1483% ( 3) 00:13:17.218 2.662 - 2.676: 98.1580% ( 1) 00:13:17.218 2.676 - 2.691: 98.1677% ( 1) 00:13:17.218 2.691 - 2.705: 98.1774% ( 1) 00:13:17.218 2.705 - 2.720: 98.1871% ( 1) 00:13:17.218 2.807 - 2.822: 98.1968% ( 1) 00:13:17.218 2.938 - 2.953: 98.2162% ( 2) 00:13:17.218 3.040 - 3.055: 98.2259% ( 1) 00:13:17.218 3.084 - 3.098: 98.2356% ( 1) 00:13:17.218 3.331 - 3.345: 98.2453% ( 1) 00:13:17.218 3.578 - 3.593: 98.2550% ( 1) 00:13:17.218 3.695 - 3.709: 98.2647% ( 1) 00:13:17.218 3.956 - 3.985: 98.2744% ( 1) 00:13:17.218 3.985 - 4.015: 98.2937% ( 2) 00:13:17.218 4.015 - 4.044: 98.3034% ( 1) 00:13:17.218 4.189 - 4.218: 98.3131% ( 1) 00:13:17.218 4.247 - 4.276: 98.3228% ( 1) 00:13:17.218 4.276 - 4.305: 98.3325% ( 1) 00:13:17.218 4.451 - 4.480: 98.3422% ( 1) 00:13:17.218 4.509 - 4.538: 98.3519% ( 1) 00:13:17.218 4.567 - 4.596: 98.3616% ( 1) 00:13:17.218 4.713 - 4.742: 98.3713% ( 1) 00:13:17.218 4.771 - 4.800: 98.3810% ( 1) 00:13:17.218 4.800 - 4.829: 98.3907% ( 1) 00:13:17.218 4.829 - 4.858: 98.4004% ( 1) 00:13:17.218 4.858 - 4.887: 98.4198% ( 2) 00:13:17.218 5.062 - 5.091: 98.4295% ( 1) 00:13:17.218 5.207 - 5.236: 98.4392% ( 1) 00:13:17.218 5.295 - 5.324: 98.4489% ( 1) 00:13:17.218 6.022 - 6.051: 98.4586% ( 1) 00:13:17.218 6.109 - 6.138: 98.4779% ( 2) 00:13:17.218 6.575 - 6.604: 98.4876% ( 1) 00:13:17.218 6.749 - 6.778: 98.4973% ( 1) 00:13:17.218 6.865 - 6.895: 98.5070% ( 1) 00:13:17.218 7.331 - 7.360: 98.5167% ( 1) 00:13:17.218 7.796 - 7.855: 98.5264% ( 1) 00:13:17.218 8.145 - 8.204: 98.5458% ( 2) 00:13:17.218 8.320 - 8.378: 98.5555% ( 1) 00:13:17.218 8.378 - 8.436: 98.5652% ( 1) 00:13:17.218 8.436 - 8.495: 98.5749% ( 1) 00:13:17.218 8.960 - 
9.018: 98.6040% ( 3) 00:13:17.218 9.018 - 9.076: 98.6234% ( 2) 00:13:17.218 9.135 - 9.193: 98.6428% ( 2) 00:13:17.218 9.251 - 9.309: 98.6621% ( 2) 00:13:17.218 9.309 - 9.367: 98.6718% ( 1) 00:13:17.218 9.367 - 9.425: 98.6912% ( 2) 00:13:17.218 9.425 - 9.484: 98.7009% ( 1) 00:13:17.218 9.484 - 9.542: 98.7106% ( 1) 00:13:17.218 9.658 - 9.716: 98.7203% ( 1) 00:13:17.218 9.716 - 9.775: 98.7397% ( 2) 00:13:17.218 9.775 - 9.833: 98.7494% ( 1) 00:13:17.218 9.833 - 9.891: 98.7591% ( 1) 00:13:17.218 9.949 - 10.007: 98.7785% ( 2) 00:13:17.218 10.124 - 10.182: 98.7882% ( 1) 00:13:17.218 10.415 - 10.473: 98.7979% ( 1) 00:13:17.218 10.589 - 10.647: 98.8076% ( 1) 00:13:17.476 10.822 - 10.880: 98.8173% ( 1) 00:13:17.476 10.938 - 10.996: 98.8270% ( 1) 00:13:17.476 11.229 - 11.287: 98.8366% ( 1) 00:13:17.476 11.404 - 11.462: 98.8463% ( 1) 00:13:17.476 11.695 - 11.753: 98.8560% ( 1) 00:13:17.477 11.811 - 11.869: 98.8657% ( 1) 00:13:17.477 12.160 - 12.218: 98.8754% ( 1) 00:13:17.477 14.022 - 14.080: 98.8851% ( 1) 00:13:17.477 15.709 - 15.825: 98.8948% ( 1) 00:13:17.477 16.989 - 17.105: 98.9142% ( 2) 00:13:17.477 17.105 - 17.222: 98.9433% ( 3) 00:13:17.477 17.222 - 17.338: 98.9530% ( 1) 00:13:17.477 17.455 - 17.571: 98.9724% ( 2) 00:13:17.477 17.571 - 17.687: 98.9918% ( 2) 00:13:17.477 17.687 - 17.804: 99.0015% ( 1) 00:13:17.477 17.920 - 18.036: 99.0111% ( 1) 00:13:17.477 18.153 - 18.269: 99.0305% ( 2) 00:13:17.477 18.269 - 18.385: 99.0499% ( 2) 00:13:17.477 18.385 - 18.502: 99.0790% ( 3) 00:13:17.477 18.502 - 18.618: 99.0887% ( 1) 00:13:17.477 19.084 - 19.200: 99.0984% ( 1) 00:13:17.477 19.200 - 19.316: 99.1081% ( 1) 00:13:17.477 19.316 - 19.433: 99.1372% ( 3) 00:13:17.477 20.829 - 20.945: 99.1469% ( 1) 00:13:17.477 21.178 - 21.295: 99.1566% ( 1) 00:13:17.477 23.855 - 23.971: 99.1663% ( 1) 00:13:17.477 31.418 - 31.651: 99.1760% ( 1) 00:13:17.477 1035.171 - 1042.618: 99.1857% ( 1) 00:13:17.477 1042.618 - 1050.065: 99.1953% ( 1) 00:13:17.477 3038.487 - 3053.382: 99.2147% ( 2) 00:13:17.477 3053.382 - 3068.276: 99.2341% ( 2) 00:13:17.477 3961.949 - 3991.738: 99.3505% ( 12) 00:13:17.477 3991.738 - 4021.527: 99.7576% ( 42) 00:13:17.477 4021.527 - 4051.316: 99.9321% ( 18) 00:13:17.477 4051.316 - 4081.105: 99.9709% ( 4) 00:13:17.477 4081.105 - 4110.895: 99.9806% ( 1) 00:13:17.477 7983.476 - 8043.055: 100.0000% ( 2) 00:13:17.477 00:13:17.477 19:13:54 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:17.477 19:13:54 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:17.477 19:13:54 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:17.477 19:13:54 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:17.477 19:13:54 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:17.735 [ 00:13:17.735 { 00:13:17.735 "allow_any_host": true, 00:13:17.735 "hosts": [], 00:13:17.735 "listen_addresses": [], 00:13:17.735 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:17.735 "subtype": "Discovery" 00:13:17.735 }, 00:13:17.735 { 00:13:17.735 "allow_any_host": true, 00:13:17.735 "hosts": [], 00:13:17.735 "listen_addresses": [ 00:13:17.735 { 00:13:17.735 "adrfam": "IPv4", 00:13:17.735 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:17.735 "transport": "VFIOUSER", 00:13:17.735 "trsvcid": "0", 00:13:17.735 "trtype": "VFIOUSER" 00:13:17.735 } 00:13:17.735 ], 00:13:17.735 "max_cntlid": 65519, 00:13:17.735 
"max_namespaces": 32, 00:13:17.735 "min_cntlid": 1, 00:13:17.735 "model_number": "SPDK bdev Controller", 00:13:17.735 "namespaces": [ 00:13:17.735 { 00:13:17.735 "bdev_name": "Malloc1", 00:13:17.735 "name": "Malloc1", 00:13:17.735 "nguid": "83B7C26DF52E4C87AC65595B0D2E5B33", 00:13:17.735 "nsid": 1, 00:13:17.735 "uuid": "83b7c26d-f52e-4c87-ac65-595b0d2e5b33" 00:13:17.735 }, 00:13:17.735 { 00:13:17.735 "bdev_name": "Malloc3", 00:13:17.735 "name": "Malloc3", 00:13:17.735 "nguid": "317767F53998452CA2532A5AA0460909", 00:13:17.735 "nsid": 2, 00:13:17.735 "uuid": "317767f5-3998-452c-a253-2a5aa0460909" 00:13:17.735 } 00:13:17.735 ], 00:13:17.735 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:17.735 "serial_number": "SPDK1", 00:13:17.735 "subtype": "NVMe" 00:13:17.735 }, 00:13:17.735 { 00:13:17.735 "allow_any_host": true, 00:13:17.735 "hosts": [], 00:13:17.735 "listen_addresses": [ 00:13:17.735 { 00:13:17.735 "adrfam": "IPv4", 00:13:17.735 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:17.735 "transport": "VFIOUSER", 00:13:17.735 "trsvcid": "0", 00:13:17.735 "trtype": "VFIOUSER" 00:13:17.735 } 00:13:17.735 ], 00:13:17.735 "max_cntlid": 65519, 00:13:17.735 "max_namespaces": 32, 00:13:17.735 "min_cntlid": 1, 00:13:17.735 "model_number": "SPDK bdev Controller", 00:13:17.735 "namespaces": [ 00:13:17.735 { 00:13:17.735 "bdev_name": "Malloc2", 00:13:17.735 "name": "Malloc2", 00:13:17.735 "nguid": "8A891366B6604FE28D842A64F92A5BBD", 00:13:17.735 "nsid": 1, 00:13:17.735 "uuid": "8a891366-b660-4fe2-8d84-2a64f92a5bbd" 00:13:17.735 } 00:13:17.735 ], 00:13:17.735 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:17.735 "serial_number": "SPDK2", 00:13:17.735 "subtype": "NVMe" 00:13:17.735 } 00:13:17.735 ] 00:13:17.735 19:13:54 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:17.735 19:13:54 -- target/nvmf_vfio_user.sh@34 -- # aerpid=70170 00:13:17.735 19:13:54 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:17.735 19:13:54 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:17.735 19:13:54 -- common/autotest_common.sh@1242 -- # local i=0 00:13:17.735 19:13:54 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:17.735 19:13:54 -- common/autotest_common.sh@1244 -- # '[' 0 -lt 200 ']' 00:13:17.735 19:13:54 -- common/autotest_common.sh@1245 -- # i=1 00:13:17.735 19:13:54 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:13:17.735 19:13:55 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:17.735 19:13:55 -- common/autotest_common.sh@1244 -- # '[' 1 -lt 200 ']' 00:13:17.735 19:13:55 -- common/autotest_common.sh@1245 -- # i=2 00:13:17.735 19:13:55 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:13:17.735 19:13:55 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:17.735 19:13:55 -- common/autotest_common.sh@1244 -- # '[' 2 -lt 200 ']' 00:13:17.735 19:13:55 -- common/autotest_common.sh@1245 -- # i=3 00:13:17.735 19:13:55 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:13:17.994 19:13:55 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:17.994 19:13:55 -- common/autotest_common.sh@1249 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:17.994 19:13:55 -- common/autotest_common.sh@1253 -- # return 0 00:13:17.994 19:13:55 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:17.994 19:13:55 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:18.253 Malloc4 00:13:18.253 19:13:55 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:18.511 19:13:55 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:18.511 Asynchronous Event Request test 00:13:18.511 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:18.511 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:18.511 Registering asynchronous event callbacks... 00:13:18.511 Starting namespace attribute notice tests for all controllers... 00:13:18.511 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:18.511 aer_cb - Changed Namespace 00:13:18.511 Cleaning up... 00:13:18.770 [ 00:13:18.770 { 00:13:18.770 "allow_any_host": true, 00:13:18.770 "hosts": [], 00:13:18.770 "listen_addresses": [], 00:13:18.770 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:18.770 "subtype": "Discovery" 00:13:18.770 }, 00:13:18.770 { 00:13:18.770 "allow_any_host": true, 00:13:18.770 "hosts": [], 00:13:18.770 "listen_addresses": [ 00:13:18.770 { 00:13:18.770 "adrfam": "IPv4", 00:13:18.770 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:18.770 "transport": "VFIOUSER", 00:13:18.770 "trsvcid": "0", 00:13:18.770 "trtype": "VFIOUSER" 00:13:18.770 } 00:13:18.770 ], 00:13:18.770 "max_cntlid": 65519, 00:13:18.770 "max_namespaces": 32, 00:13:18.770 "min_cntlid": 1, 00:13:18.770 "model_number": "SPDK bdev Controller", 00:13:18.770 "namespaces": [ 00:13:18.770 { 00:13:18.770 "bdev_name": "Malloc1", 00:13:18.770 "name": "Malloc1", 00:13:18.770 "nguid": "83B7C26DF52E4C87AC65595B0D2E5B33", 00:13:18.770 "nsid": 1, 00:13:18.770 "uuid": "83b7c26d-f52e-4c87-ac65-595b0d2e5b33" 00:13:18.770 }, 00:13:18.770 { 00:13:18.770 "bdev_name": "Malloc3", 00:13:18.770 "name": "Malloc3", 00:13:18.770 "nguid": "317767F53998452CA2532A5AA0460909", 00:13:18.770 "nsid": 2, 00:13:18.770 "uuid": "317767f5-3998-452c-a253-2a5aa0460909" 00:13:18.770 } 00:13:18.770 ], 00:13:18.770 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:18.770 "serial_number": "SPDK1", 00:13:18.770 "subtype": "NVMe" 00:13:18.770 }, 00:13:18.770 { 00:13:18.770 "allow_any_host": true, 00:13:18.770 "hosts": [], 00:13:18.770 "listen_addresses": [ 00:13:18.770 { 00:13:18.770 "adrfam": "IPv4", 00:13:18.770 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:18.770 "transport": "VFIOUSER", 00:13:18.770 "trsvcid": "0", 00:13:18.770 "trtype": "VFIOUSER" 00:13:18.770 } 00:13:18.770 ], 00:13:18.770 "max_cntlid": 65519, 00:13:18.770 "max_namespaces": 32, 00:13:18.770 "min_cntlid": 1, 00:13:18.770 "model_number": "SPDK bdev Controller", 00:13:18.770 "namespaces": [ 00:13:18.770 { 00:13:18.770 "bdev_name": "Malloc2", 00:13:18.770 "name": "Malloc2", 00:13:18.770 "nguid": "8A891366B6604FE28D842A64F92A5BBD", 00:13:18.770 "nsid": 1, 00:13:18.770 "uuid": "8a891366-b660-4fe2-8d84-2a64f92a5bbd" 00:13:18.770 }, 00:13:18.770 { 00:13:18.770 "bdev_name": "Malloc4", 00:13:18.770 "name": "Malloc4", 00:13:18.770 "nguid": "89EC4A4F16A04A70B436B9DA3FA3BAA4", 00:13:18.770 "nsid": 2, 00:13:18.770 "uuid": "89ec4a4f-16a0-4a70-b436-b9da3fa3baa4" 
00:13:18.770 } 00:13:18.770 ], 00:13:18.770 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:18.770 "serial_number": "SPDK2", 00:13:18.770 "subtype": "NVMe" 00:13:18.770 } 00:13:18.770 ] 00:13:18.770 19:13:56 -- target/nvmf_vfio_user.sh@44 -- # wait 70170 00:13:18.770 19:13:56 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:18.770 19:13:56 -- target/nvmf_vfio_user.sh@95 -- # killprocess 69483 00:13:18.770 19:13:56 -- common/autotest_common.sh@924 -- # '[' -z 69483 ']' 00:13:18.770 19:13:56 -- common/autotest_common.sh@928 -- # kill -0 69483 00:13:18.770 19:13:56 -- common/autotest_common.sh@929 -- # uname 00:13:18.770 19:13:56 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:18.770 19:13:56 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 69483 00:13:18.770 19:13:56 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:18.770 19:13:56 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:18.770 19:13:56 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 69483' 00:13:18.770 killing process with pid 69483 00:13:18.770 19:13:56 -- common/autotest_common.sh@943 -- # kill 69483 00:13:18.770 [2024-02-14 19:13:56.051712] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:18.770 19:13:56 -- common/autotest_common.sh@948 -- # wait 69483 00:13:19.338 19:13:56 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:19.338 19:13:56 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:19.338 19:13:56 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:19.338 19:13:56 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:19.338 19:13:56 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:19.338 19:13:56 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=70219 00:13:19.338 Process pid: 70219 00:13:19.338 19:13:56 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 70219' 00:13:19.338 19:13:56 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:19.338 19:13:56 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 70219 00:13:19.338 19:13:56 -- common/autotest_common.sh@817 -- # '[' -z 70219 ']' 00:13:19.338 19:13:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.338 19:13:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:19.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.338 19:13:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.338 19:13:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:19.338 19:13:56 -- common/autotest_common.sh@10 -- # set +x 00:13:19.338 19:13:56 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:19.338 [2024-02-14 19:13:56.707082] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:19.338 [2024-02-14 19:13:56.708463] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:13:19.338 [2024-02-14 19:13:56.708570] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.668 [2024-02-14 19:13:56.842278] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.668 [2024-02-14 19:13:57.001719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:19.668 [2024-02-14 19:13:57.002165] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.668 [2024-02-14 19:13:57.002274] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.668 [2024-02-14 19:13:57.002344] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.668 [2024-02-14 19:13:57.002552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.668 [2024-02-14 19:13:57.003118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.668 [2024-02-14 19:13:57.003279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.668 [2024-02-14 19:13:57.003283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.951 [2024-02-14 19:13:57.134093] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:13:19.951 [2024-02-14 19:13:57.140779] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:13:19.951 [2024-02-14 19:13:57.140938] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:13:19.951 [2024-02-14 19:13:57.141848] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:19.951 [2024-02-14 19:13:57.141980] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
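The trace above shows the target being brought up in interrupt mode: nvmf_tgt is launched with --interrupt-mode, a four-core mask and tracepoints enabled, and each nvmf_tgt poll-group thread plus the app_thread is switched to intr mode before the script's wait loop returns. A minimal sketch of that launch-and-wait pattern, assuming the same build path and the /var/tmp/spdk.sock RPC socket shown in the trace (the real waitforlisten helper lives in autotest_common.sh and is only paraphrased here), might look like:

#!/usr/bin/env bash
# Sketch only: start nvmf_tgt in interrupt mode and wait for its RPC socket.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
nvmfpid=$!

# Poll the RPC socket until the target answers (rough stand-in for waitforlisten).
for ((i = 100; i > 0; i--)); do
    "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.5
done
(( i > 0 )) || { echo "nvmf_tgt did not start" >&2; kill "$nvmfpid"; exit 1; }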
00:13:20.519 19:13:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:20.519 19:13:57 -- common/autotest_common.sh@850 -- # return 0 00:13:20.519 19:13:57 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:21.453 19:13:58 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:21.711 19:13:58 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:21.711 19:13:58 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:21.711 19:13:58 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:21.711 19:13:58 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:21.711 19:13:58 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:21.969 Malloc1 00:13:21.970 19:13:59 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:22.228 19:13:59 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:22.485 19:13:59 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:22.745 19:14:00 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:22.745 19:14:00 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:22.745 19:14:00 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:23.003 Malloc2 00:13:23.003 19:14:00 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:23.261 19:14:00 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:23.518 19:14:00 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:23.776 19:14:01 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:23.776 19:14:01 -- target/nvmf_vfio_user.sh@95 -- # killprocess 70219 00:13:23.776 19:14:01 -- common/autotest_common.sh@924 -- # '[' -z 70219 ']' 00:13:23.776 19:14:01 -- common/autotest_common.sh@928 -- # kill -0 70219 00:13:23.776 19:14:01 -- common/autotest_common.sh@929 -- # uname 00:13:23.776 19:14:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:23.776 19:14:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 70219 00:13:23.776 19:14:01 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:23.776 19:14:01 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:23.776 killing process with pid 70219 00:13:23.776 19:14:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 70219' 00:13:23.776 19:14:01 -- common/autotest_common.sh@943 -- # kill 70219 00:13:23.776 19:14:01 -- common/autotest_common.sh@948 -- # wait 70219 00:13:24.344 19:14:01 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:24.344 19:14:01 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:24.344 00:13:24.344 real 0m56.136s 00:13:24.344 user 3m39.984s 00:13:24.344 sys 0m3.986s 
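That run also traces the full per-device bring-up that setup_nvmf_vfio_user performs over JSON-RPC before the test body: create the VFIOUSER transport (with -M -I in this interrupt-mode pass), then for each of the two devices create a 64 MB malloc bdev with 512-byte blocks, create a subsystem, attach the bdev as a namespace, and add a vfio-user listener rooted under /var/run/vfio-user. A condensed sketch of that sequence, assuming the same rpc.py and socket defaults as in the trace, could look like:

#!/usr/bin/env bash
# Sketch of the setup traced above: two vfio-user subsystems backed by malloc bdevs.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t VFIOUSER -M -I    # -M -I only in the interrupt-mode variant
mkdir -p /var/run/vfio-user

for i in 1 2; do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"
    $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER -a "$dir" -s 0
done

Teardown is the reverse: kill the target process and rm -rf /var/run/vfio-user, exactly as the stop_nvmf_vfio_user trace above shows.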
00:13:24.344 19:14:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:24.344 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:13:24.344 ************************************ 00:13:24.344 END TEST nvmf_vfio_user 00:13:24.344 ************************************ 00:13:24.344 19:14:01 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:24.344 19:14:01 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:24.344 19:14:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:24.344 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:13:24.344 ************************************ 00:13:24.344 START TEST nvmf_vfio_user_nvme_compliance 00:13:24.344 ************************************ 00:13:24.344 19:14:01 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:24.344 * Looking for test storage... 00:13:24.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:13:24.344 19:14:01 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:24.344 19:14:01 -- nvmf/common.sh@7 -- # uname -s 00:13:24.344 19:14:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.344 19:14:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.344 19:14:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.344 19:14:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.344 19:14:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.344 19:14:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.344 19:14:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.344 19:14:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.344 19:14:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.344 19:14:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.344 19:14:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:13:24.344 19:14:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:13:24.344 19:14:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.344 19:14:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.344 19:14:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:24.344 19:14:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:24.344 19:14:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.344 19:14:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.344 19:14:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.344 19:14:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.344 19:14:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.344 19:14:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.344 19:14:01 -- paths/export.sh@5 -- # export PATH 00:13:24.344 19:14:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.344 19:14:01 -- nvmf/common.sh@46 -- # : 0 00:13:24.344 19:14:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:24.344 19:14:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:24.344 19:14:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:24.344 19:14:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.344 19:14:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.344 19:14:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:24.344 19:14:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:24.344 19:14:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:24.344 19:14:01 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:24.344 19:14:01 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:24.344 19:14:01 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:24.344 19:14:01 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:24.344 19:14:01 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:24.344 19:14:01 -- compliance/compliance.sh@20 -- # nvmfpid=70413 00:13:24.344 Process pid: 70413 00:13:24.344 19:14:01 -- compliance/compliance.sh@21 -- # echo 'Process pid: 70413' 00:13:24.344 19:14:01 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:24.344 19:14:01 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:24.344 19:14:01 -- compliance/compliance.sh@24 -- # waitforlisten 70413 00:13:24.344 19:14:01 -- common/autotest_common.sh@817 -- # '[' -z 70413 ']' 00:13:24.344 19:14:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.344 19:14:01 -- common/autotest_common.sh@822 -- # local max_retries=100 
00:13:24.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.344 19:14:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.344 19:14:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:24.344 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:13:24.602 [2024-02-14 19:14:01.812299] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:13:24.603 [2024-02-14 19:14:01.812420] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.603 [2024-02-14 19:14:01.947146] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:24.861 [2024-02-14 19:14:02.092987] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:24.861 [2024-02-14 19:14:02.093196] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.861 [2024-02-14 19:14:02.093210] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.861 [2024-02-14 19:14:02.093219] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.861 [2024-02-14 19:14:02.093468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.861 [2024-02-14 19:14:02.094149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.861 [2024-02-14 19:14:02.094195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.797 19:14:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:25.797 19:14:02 -- common/autotest_common.sh@850 -- # return 0 00:13:25.797 19:14:02 -- compliance/compliance.sh@26 -- # sleep 1 00:13:26.732 19:14:03 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:26.732 19:14:03 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:26.732 19:14:03 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:26.732 19:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.732 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:13:26.732 19:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.732 19:14:03 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:26.732 19:14:03 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:26.732 19:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.732 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:13:26.732 malloc0 00:13:26.732 19:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.732 19:14:03 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:26.732 19:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.733 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:13:26.733 19:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.733 19:14:03 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:26.733 19:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.733 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:13:26.733 19:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:13:26.733 19:14:03 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:26.733 19:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.733 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:13:26.733 19:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.733 19:14:03 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:26.733 00:13:26.733 00:13:26.733 CUnit - A unit testing framework for C - Version 2.1-3 00:13:26.733 http://cunit.sourceforge.net/ 00:13:26.733 00:13:26.733 00:13:26.733 Suite: nvme_compliance 00:13:26.991 Test: admin_identify_ctrlr_verify_dptr ...[2024-02-14 19:14:04.182833] vfio_user.c: 790:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:26.991 [2024-02-14 19:14:04.182929] vfio_user.c:5485:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:26.991 [2024-02-14 19:14:04.182941] vfio_user.c:5577:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:26.991 passed 00:13:26.991 Test: admin_identify_ctrlr_verify_fused ...passed 00:13:27.250 Test: admin_identify_ns ...[2024-02-14 19:14:04.430551] ctrlr.c:2632:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:27.250 [2024-02-14 19:14:04.438546] ctrlr.c:2632:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:27.250 passed 00:13:27.250 Test: admin_get_features_mandatory_features ...passed 00:13:27.508 Test: admin_get_features_optional_features ...passed 00:13:27.508 Test: admin_set_features_number_of_queues ...passed 00:13:27.766 Test: admin_get_log_page_mandatory_logs ...passed 00:13:27.766 Test: admin_get_log_page_with_lpo ...[2024-02-14 19:14:05.089531] ctrlr.c:2580:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:27.767 passed 00:13:28.035 Test: fabric_property_get ...passed 00:13:28.035 Test: admin_delete_io_sq_use_admin_qid ...[2024-02-14 19:14:05.287853] vfio_user.c:2301:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:28.035 passed 00:13:28.317 Test: admin_delete_io_sq_delete_sq_twice ...[2024-02-14 19:14:05.465545] vfio_user.c:2301:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:28.317 [2024-02-14 19:14:05.481518] vfio_user.c:2301:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:28.317 passed 00:13:28.317 Test: admin_delete_io_cq_use_admin_qid ...[2024-02-14 19:14:05.576876] vfio_user.c:2301:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:28.317 passed 00:13:28.575 Test: admin_delete_io_cq_delete_cq_first ...[2024-02-14 19:14:05.747578] vfio_user.c:2311:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:28.575 [2024-02-14 19:14:05.771517] vfio_user.c:2301:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:28.575 passed 00:13:28.575 Test: admin_create_io_cq_verify_iv_pc ...[2024-02-14 19:14:05.865889] vfio_user.c:2151:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:28.575 [2024-02-14 19:14:05.865980] vfio_user.c:2145:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:28.575 passed 00:13:28.833 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-02-14 19:14:06.053527] 
vfio_user.c:2232:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:28.833 [2024-02-14 19:14:06.061515] vfio_user.c:2232:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:28.833 [2024-02-14 19:14:06.069513] vfio_user.c:2032:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:28.833 [2024-02-14 19:14:06.077510] vfio_user.c:2032:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:28.833 passed 00:13:28.833 Test: admin_create_io_sq_verify_pc ...[2024-02-14 19:14:06.210534] vfio_user.c:2045:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:29.092 passed 00:13:30.027 Test: admin_create_io_qp_max_qps ...[2024-02-14 19:14:07.409520] nvme_ctrlr.c:5306:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:30.592 passed 00:13:30.850 Test: admin_create_io_sq_shared_cq ...[2024-02-14 19:14:08.019540] vfio_user.c:2311:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:30.850 passed 00:13:30.850 00:13:30.850 Run Summary: Type Total Ran Passed Failed Inactive 00:13:30.850 suites 1 1 n/a 0 0 00:13:30.850 tests 18 18 18 0 0 00:13:30.850 asserts 360 360 360 0 n/a 00:13:30.850 00:13:30.850 Elapsed time = 1.613 seconds 00:13:30.850 19:14:08 -- compliance/compliance.sh@42 -- # killprocess 70413 00:13:30.850 19:14:08 -- common/autotest_common.sh@924 -- # '[' -z 70413 ']' 00:13:30.850 19:14:08 -- common/autotest_common.sh@928 -- # kill -0 70413 00:13:30.850 19:14:08 -- common/autotest_common.sh@929 -- # uname 00:13:30.850 19:14:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:30.850 19:14:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 70413 00:13:30.850 19:14:08 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:30.850 19:14:08 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:30.850 19:14:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 70413' 00:13:30.850 killing process with pid 70413 00:13:30.850 19:14:08 -- common/autotest_common.sh@943 -- # kill 70413 00:13:30.850 19:14:08 -- common/autotest_common.sh@948 -- # wait 70413 00:13:31.109 19:14:08 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:31.109 19:14:08 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:31.109 00:13:31.109 real 0m6.784s 00:13:31.109 user 0m18.975s 00:13:31.109 sys 0m0.566s 00:13:31.109 19:14:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:31.109 ************************************ 00:13:31.109 END TEST nvmf_vfio_user_nvme_compliance 00:13:31.109 ************************************ 00:13:31.109 19:14:08 -- common/autotest_common.sh@10 -- # set +x 00:13:31.109 19:14:08 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:31.109 19:14:08 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:31.109 19:14:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:31.109 19:14:08 -- common/autotest_common.sh@10 -- # set +x 00:13:31.109 ************************************ 00:13:31.109 START TEST nvmf_vfio_user_fuzz 00:13:31.109 ************************************ 00:13:31.109 19:14:08 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:31.368 * Looking for test storage... 
00:13:31.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:31.368 19:14:08 -- nvmf/common.sh@7 -- # uname -s 00:13:31.368 19:14:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.368 19:14:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.368 19:14:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.368 19:14:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.368 19:14:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.368 19:14:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.368 19:14:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.368 19:14:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.368 19:14:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.368 19:14:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.368 19:14:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:13:31.368 19:14:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:13:31.368 19:14:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.368 19:14:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.368 19:14:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:31.368 19:14:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:31.368 19:14:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.368 19:14:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.368 19:14:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.368 19:14:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.368 19:14:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.368 19:14:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.368 19:14:08 -- 
paths/export.sh@5 -- # export PATH 00:13:31.368 19:14:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.368 19:14:08 -- nvmf/common.sh@46 -- # : 0 00:13:31.368 19:14:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:31.368 19:14:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:31.368 19:14:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:31.368 19:14:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.368 19:14:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.368 19:14:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:31.368 19:14:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:31.368 19:14:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=70562 00:13:31.368 Process pid: 70562 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 70562' 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 70562 00:13:31.368 19:14:08 -- common/autotest_common.sh@817 -- # '[' -z 70562 ']' 00:13:31.368 19:14:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.368 19:14:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:31.368 19:14:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:31.368 19:14:08 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:31.368 19:14:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:31.368 19:14:08 -- common/autotest_common.sh@10 -- # set +x 00:13:32.302 19:14:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:32.302 19:14:09 -- common/autotest_common.sh@850 -- # return 0 00:13:32.302 19:14:09 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:33.268 19:14:10 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:33.268 19:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.268 19:14:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.268 19:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.268 19:14:10 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:33.268 19:14:10 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:33.268 19:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.268 19:14:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.268 malloc0 00:13:33.268 19:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.268 19:14:10 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:33.268 19:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.268 19:14:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.268 19:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.268 19:14:10 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:33.268 19:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.268 19:14:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.268 19:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.268 19:14:10 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:33.268 19:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.268 19:14:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.268 19:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.268 19:14:10 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:33.268 19:14:10 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:33.834 Shutting down the fuzz application 00:13:33.834 19:14:11 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:33.834 19:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.834 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:33.834 19:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.834 19:14:11 -- target/vfio_user_fuzz.sh@46 -- # killprocess 70562 00:13:33.834 19:14:11 -- common/autotest_common.sh@924 -- # '[' -z 70562 ']' 00:13:33.834 19:14:11 -- common/autotest_common.sh@928 -- # kill -0 70562 00:13:33.834 19:14:11 -- common/autotest_common.sh@929 -- # uname 00:13:33.834 19:14:11 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:33.834 19:14:11 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 70562 00:13:33.834 19:14:11 -- 
common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:33.834 19:14:11 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:33.834 killing process with pid 70562 00:13:33.834 19:14:11 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 70562' 00:13:33.834 19:14:11 -- common/autotest_common.sh@943 -- # kill 70562 00:13:33.834 19:14:11 -- common/autotest_common.sh@948 -- # wait 70562 00:13:34.092 19:14:11 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:34.092 19:14:11 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:34.092 00:13:34.092 real 0m3.021s 00:13:34.092 user 0m3.411s 00:13:34.092 sys 0m0.406s 00:13:34.092 19:14:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:34.093 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:34.093 ************************************ 00:13:34.093 END TEST nvmf_vfio_user_fuzz 00:13:34.093 ************************************ 00:13:34.351 19:14:11 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:34.351 19:14:11 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:34.351 19:14:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:34.351 19:14:11 -- common/autotest_common.sh@10 -- # set +x 00:13:34.351 ************************************ 00:13:34.351 START TEST nvmf_host_management 00:13:34.351 ************************************ 00:13:34.351 19:14:11 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:34.351 * Looking for test storage... 
00:13:34.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:34.351 19:14:11 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:34.351 19:14:11 -- nvmf/common.sh@7 -- # uname -s 00:13:34.351 19:14:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.351 19:14:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.351 19:14:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.351 19:14:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.351 19:14:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.351 19:14:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.351 19:14:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.351 19:14:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.351 19:14:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.351 19:14:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.351 19:14:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:13:34.351 19:14:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:13:34.351 19:14:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.351 19:14:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.351 19:14:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:34.351 19:14:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:34.351 19:14:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.351 19:14:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.351 19:14:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.351 19:14:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.351 19:14:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.351 19:14:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.351 19:14:11 -- 
paths/export.sh@5 -- # export PATH 00:13:34.351 19:14:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.351 19:14:11 -- nvmf/common.sh@46 -- # : 0 00:13:34.351 19:14:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:34.351 19:14:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:34.351 19:14:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:34.351 19:14:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.351 19:14:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.351 19:14:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:34.351 19:14:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:34.351 19:14:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:34.351 19:14:11 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.351 19:14:11 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:34.351 19:14:11 -- target/host_management.sh@104 -- # nvmftestinit 00:13:34.351 19:14:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:34.351 19:14:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.351 19:14:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:34.352 19:14:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:34.352 19:14:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:34.352 19:14:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.352 19:14:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.352 19:14:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.352 19:14:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:34.352 19:14:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:34.352 19:14:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:34.352 19:14:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:34.352 19:14:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:34.352 19:14:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:34.352 19:14:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.352 19:14:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.352 19:14:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:34.352 19:14:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:34.352 19:14:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:34.352 19:14:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:34.352 19:14:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:34.352 19:14:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.352 19:14:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:34.352 19:14:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:34.352 19:14:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:34.352 19:14:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:34.352 19:14:11 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:13:34.352 19:14:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:34.352 Cannot find device "nvmf_tgt_br" 00:13:34.352 19:14:11 -- nvmf/common.sh@154 -- # true 00:13:34.352 19:14:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:34.352 Cannot find device "nvmf_tgt_br2" 00:13:34.352 19:14:11 -- nvmf/common.sh@155 -- # true 00:13:34.352 19:14:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:34.352 19:14:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:34.352 Cannot find device "nvmf_tgt_br" 00:13:34.352 19:14:11 -- nvmf/common.sh@157 -- # true 00:13:34.352 19:14:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:34.352 Cannot find device "nvmf_tgt_br2" 00:13:34.352 19:14:11 -- nvmf/common.sh@158 -- # true 00:13:34.352 19:14:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:34.611 19:14:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:34.611 19:14:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:34.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.611 19:14:11 -- nvmf/common.sh@161 -- # true 00:13:34.611 19:14:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:34.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.611 19:14:11 -- nvmf/common.sh@162 -- # true 00:13:34.611 19:14:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:34.611 19:14:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:34.611 19:14:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:34.611 19:14:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:34.611 19:14:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:34.611 19:14:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:34.611 19:14:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:34.611 19:14:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:34.611 19:14:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:34.611 19:14:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:34.611 19:14:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:34.611 19:14:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:34.611 19:14:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:34.611 19:14:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:34.611 19:14:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:34.611 19:14:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:34.611 19:14:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:34.611 19:14:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:34.611 19:14:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:34.611 19:14:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:34.611 19:14:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:34.611 19:14:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:13:34.611 19:14:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:34.611 19:14:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:34.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:13:34.611 00:13:34.611 --- 10.0.0.2 ping statistics --- 00:13:34.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.611 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:34.611 19:14:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:34.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:34.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:34.611 00:13:34.611 --- 10.0.0.3 ping statistics --- 00:13:34.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.611 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:34.611 19:14:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:34.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:34.611 00:13:34.611 --- 10.0.0.1 ping statistics --- 00:13:34.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.611 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:34.611 19:14:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.611 19:14:12 -- nvmf/common.sh@421 -- # return 0 00:13:34.611 19:14:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:34.611 19:14:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.611 19:14:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:34.611 19:14:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:34.611 19:14:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.611 19:14:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:34.611 19:14:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:34.870 19:14:12 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:34.870 19:14:12 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:34.870 19:14:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:34.870 19:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 ************************************ 00:13:34.870 START TEST nvmf_host_management 00:13:34.870 ************************************ 00:13:34.870 19:14:12 -- common/autotest_common.sh@1102 -- # nvmf_host_management 00:13:34.870 19:14:12 -- target/host_management.sh@69 -- # starttarget 00:13:34.870 19:14:12 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:34.870 19:14:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:34.870 19:14:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:34.870 19:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 19:14:12 -- nvmf/common.sh@469 -- # nvmfpid=70793 00:13:34.870 19:14:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:34.870 19:14:12 -- nvmf/common.sh@470 -- # waitforlisten 70793 00:13:34.870 19:14:12 -- common/autotest_common.sh@817 -- # '[' -z 70793 ']' 00:13:34.870 19:14:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.870 19:14:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:34.870 19:14:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.870 19:14:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:34.870 19:14:12 -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 [2024-02-14 19:14:12.107099] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:13:34.870 [2024-02-14 19:14:12.107199] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.870 [2024-02-14 19:14:12.246963] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.129 [2024-02-14 19:14:12.387774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.129 [2024-02-14 19:14:12.387980] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.129 [2024-02-14 19:14:12.387996] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.129 [2024-02-14 19:14:12.388008] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.129 [2024-02-14 19:14:12.388185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.129 [2024-02-14 19:14:12.388713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.129 [2024-02-14 19:14:12.388835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:35.129 [2024-02-14 19:14:12.388842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.062 19:14:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:36.062 19:14:13 -- common/autotest_common.sh@850 -- # return 0 00:13:36.062 19:14:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:36.062 19:14:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:36.062 19:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.062 19:14:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.062 19:14:13 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.062 19:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.062 19:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.062 [2024-02-14 19:14:13.161716] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.062 19:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.063 19:14:13 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:36.063 19:14:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:36.063 19:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.063 19:14:13 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:36.063 19:14:13 -- target/host_management.sh@23 -- # cat 00:13:36.063 19:14:13 -- target/host_management.sh@30 -- # rpc_cmd 00:13:36.063 19:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.063 19:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.063 Malloc0 00:13:36.063 [2024-02-14 19:14:13.247796] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.063 19:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.063 19:14:13 -- 
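Before this host-management run, nvmf_veth_init built the virtual topology seen a little earlier in the trace: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, an nvmf_br bridge joining them to the initiator side, 10.0.0.1 on the initiator interface and 10.0.0.2/10.0.0.3 on the two target interfaces, plus iptables ACCEPT rules for TCP port 4420 and bridge forwarding, all verified by the pings. A stripped-down sketch of that setup, covering only one target interface and assuming the same names and addresses, might be:

#!/usr/bin/env bash
# Sketch of the veth/netns topology nvmf_veth_init creates (second target interface omitted).
set -e
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2    # sanity check: reach the target address through the bridge

The target itself is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...), which is why the NVMF_APP command line in the trace is prefixed with NVMF_TARGET_NS_CMD.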
target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:36.063 19:14:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:36.063 19:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.063 19:14:13 -- target/host_management.sh@73 -- # perfpid=70865 00:13:36.063 19:14:13 -- target/host_management.sh@74 -- # waitforlisten 70865 /var/tmp/bdevperf.sock 00:13:36.063 19:14:13 -- common/autotest_common.sh@817 -- # '[' -z 70865 ']' 00:13:36.063 19:14:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.063 19:14:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:36.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.063 19:14:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.063 19:14:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:36.063 19:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.063 19:14:13 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:36.063 19:14:13 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:36.063 19:14:13 -- nvmf/common.sh@520 -- # config=() 00:13:36.063 19:14:13 -- nvmf/common.sh@520 -- # local subsystem config 00:13:36.063 19:14:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:36.063 19:14:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:36.063 { 00:13:36.063 "params": { 00:13:36.063 "name": "Nvme$subsystem", 00:13:36.063 "trtype": "$TEST_TRANSPORT", 00:13:36.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:36.063 "adrfam": "ipv4", 00:13:36.063 "trsvcid": "$NVMF_PORT", 00:13:36.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:36.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:36.063 "hdgst": ${hdgst:-false}, 00:13:36.063 "ddgst": ${ddgst:-false} 00:13:36.063 }, 00:13:36.063 "method": "bdev_nvme_attach_controller" 00:13:36.063 } 00:13:36.063 EOF 00:13:36.063 )") 00:13:36.063 19:14:13 -- nvmf/common.sh@542 -- # cat 00:13:36.063 19:14:13 -- nvmf/common.sh@544 -- # jq . 00:13:36.063 19:14:13 -- nvmf/common.sh@545 -- # IFS=, 00:13:36.063 19:14:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:36.063 "params": { 00:13:36.063 "name": "Nvme0", 00:13:36.063 "trtype": "tcp", 00:13:36.063 "traddr": "10.0.0.2", 00:13:36.063 "adrfam": "ipv4", 00:13:36.063 "trsvcid": "4420", 00:13:36.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:36.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:36.063 "hdgst": false, 00:13:36.063 "ddgst": false 00:13:36.063 }, 00:13:36.063 "method": "bdev_nvme_attach_controller" 00:13:36.063 }' 00:13:36.063 [2024-02-14 19:14:13.349173] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:13:36.063 [2024-02-14 19:14:13.349265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70865 ] 00:13:36.321 [2024-02-14 19:14:13.490915] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.321 [2024-02-14 19:14:13.617553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.321 [2024-02-14 19:14:13.617641] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:13:36.579 Running I/O for 10 seconds... 00:13:37.149 19:14:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:37.149 19:14:14 -- common/autotest_common.sh@850 -- # return 0 00:13:37.149 19:14:14 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:37.149 19:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.149 19:14:14 -- common/autotest_common.sh@10 -- # set +x 00:13:37.149 19:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.149 19:14:14 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:37.149 19:14:14 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:37.149 19:14:14 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:37.149 19:14:14 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:37.149 19:14:14 -- target/host_management.sh@52 -- # local ret=1 00:13:37.149 19:14:14 -- target/host_management.sh@53 -- # local i 00:13:37.149 19:14:14 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:37.149 19:14:14 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:37.149 19:14:14 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:37.149 19:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.149 19:14:14 -- common/autotest_common.sh@10 -- # set +x 00:13:37.149 19:14:14 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:37.149 19:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.149 19:14:14 -- target/host_management.sh@55 -- # read_io_count=1806 00:13:37.149 19:14:14 -- target/host_management.sh@58 -- # '[' 1806 -ge 100 ']' 00:13:37.149 19:14:14 -- target/host_management.sh@59 -- # ret=0 00:13:37.149 19:14:14 -- target/host_management.sh@60 -- # break 00:13:37.149 19:14:14 -- target/host_management.sh@64 -- # return 0 00:13:37.149 19:14:14 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:37.149 19:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.149 19:14:14 -- common/autotest_common.sh@10 -- # set +x 00:13:37.149 [2024-02-14 19:14:14.396865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.396912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.396924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 
19:14:14.396933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.396942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.396960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.396969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.396978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.396986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.396994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.149 [2024-02-14 19:14:14.397101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to 
be set 00:13:37.150 [2024-02-14 19:14:14.397124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0340 is same with the state(5) to be set 00:13:37.150 [2024-02-14 19:14:14.397603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.397984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.397993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.398004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.398013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.398024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.398033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:13:37.150 [2024-02-14 19:14:14.398043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.398052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.398063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.398072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.398083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.398092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.398104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.398113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.398124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.398133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.398144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.398153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.398164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.398173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.150 [2024-02-14 19:14:14.398183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.150 [2024-02-14 19:14:14.398192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:13:37.151 [2024-02-14 19:14:14.398244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 
[2024-02-14 19:14:14.398447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 
19:14:14.398681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 
19:14:14.398889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:37.151 [2024-02-14 19:14:14.398980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.151 [2024-02-14 19:14:14.398991] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffa40 is same with the state(5) to be set 00:13:37.151 [2024-02-14 19:14:14.399060] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21ffa40 was disconnected and freed. reset controller. 
00:13:37.151 [2024-02-14 19:14:14.400230] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:37.152 task offset: 118912 on job bdev=Nvme0n1 fails 00:13:37.152 00:13:37.152 Latency(us) 00:13:37.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.152 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:37.152 Job: Nvme0n1 ended in about 0.60 seconds with error 00:13:37.152 Verification LBA range: start 0x0 length 0x400 00:13:37.152 Nvme0n1 : 0.60 3235.69 202.23 106.91 0.00 18760.76 2159.71 28359.21 00:13:37.152 =================================================================================================================== 00:13:37.152 Total : 3235.69 202.23 106.91 0.00 18760.76 2159.71 28359.21 00:13:37.152 [2024-02-14 19:14:14.402537] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:37.152 [2024-02-14 19:14:14.402570] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fed60 (9): Bad file descriptor 00:13:37.152 [2024-02-14 19:14:14.402613] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:13:37.152 [2024-02-14 19:14:14.404689] ctrlr.c: 742:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:37.152 [2024-02-14 19:14:14.404890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:37.152 [2024-02-14 19:14:14.404919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.152 [2024-02-14 19:14:14.404937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:37.152 [2024-02-14 19:14:14.404948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:37.152 [2024-02-14 19:14:14.404957] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:37.152 [2024-02-14 19:14:14.404966] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21fed60 00:13:37.152 [2024-02-14 19:14:14.405001] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fed60 (9): Bad file descriptor 00:13:37.152 [2024-02-14 19:14:14.405019] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:37.152 [2024-02-14 19:14:14.405029] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:37.152 [2024-02-14 19:14:14.405039] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:37.152 [2024-02-14 19:14:14.405055] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:13:37.152 19:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.152 19:14:14 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:37.152 19:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:37.152 19:14:14 -- common/autotest_common.sh@10 -- # set +x 00:13:37.152 19:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:37.152 19:14:14 -- target/host_management.sh@87 -- # sleep 1 00:13:38.086 19:14:15 -- target/host_management.sh@91 -- # kill -9 70865 00:13:38.086 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (70865) - No such process 00:13:38.086 19:14:15 -- target/host_management.sh@91 -- # true 00:13:38.086 19:14:15 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:38.086 19:14:15 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:38.086 19:14:15 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:38.086 19:14:15 -- nvmf/common.sh@520 -- # config=() 00:13:38.086 19:14:15 -- nvmf/common.sh@520 -- # local subsystem config 00:13:38.086 19:14:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:38.086 19:14:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:38.086 { 00:13:38.086 "params": { 00:13:38.086 "name": "Nvme$subsystem", 00:13:38.086 "trtype": "$TEST_TRANSPORT", 00:13:38.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.086 "adrfam": "ipv4", 00:13:38.086 "trsvcid": "$NVMF_PORT", 00:13:38.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.086 "hdgst": ${hdgst:-false}, 00:13:38.086 "ddgst": ${ddgst:-false} 00:13:38.086 }, 00:13:38.086 "method": "bdev_nvme_attach_controller" 00:13:38.086 } 00:13:38.086 EOF 00:13:38.086 )") 00:13:38.086 19:14:15 -- nvmf/common.sh@542 -- # cat 00:13:38.086 19:14:15 -- nvmf/common.sh@544 -- # jq . 00:13:38.086 19:14:15 -- nvmf/common.sh@545 -- # IFS=, 00:13:38.086 19:14:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:38.086 "params": { 00:13:38.086 "name": "Nvme0", 00:13:38.086 "trtype": "tcp", 00:13:38.086 "traddr": "10.0.0.2", 00:13:38.086 "adrfam": "ipv4", 00:13:38.086 "trsvcid": "4420", 00:13:38.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:38.086 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:38.086 "hdgst": false, 00:13:38.086 "ddgst": false 00:13:38.086 }, 00:13:38.086 "method": "bdev_nvme_attach_controller" 00:13:38.086 }' 00:13:38.086 [2024-02-14 19:14:15.465668] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:13:38.086 [2024-02-14 19:14:15.465759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70917 ] 00:13:38.345 [2024-02-14 19:14:15.599776] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.345 [2024-02-14 19:14:15.721528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.345 [2024-02-14 19:14:15.721617] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:13:38.603 Running I/O for 1 seconds... 00:13:39.536 00:13:39.536 Latency(us) 00:13:39.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.536 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:39.536 Verification LBA range: start 0x0 length 0x400 00:13:39.536 Nvme0n1 : 1.01 3285.20 205.32 0.00 0.00 19143.79 1199.01 24546.21 00:13:39.536 =================================================================================================================== 00:13:39.536 Total : 3285.20 205.32 0.00 0.00 19143.79 1199.01 24546.21 00:13:39.536 [2024-02-14 19:14:16.910994] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:13:39.793 19:14:17 -- target/host_management.sh@101 -- # stoptarget 00:13:39.793 19:14:17 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:39.793 19:14:17 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:39.793 19:14:17 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:39.793 19:14:17 -- target/host_management.sh@40 -- # nvmftestfini 00:13:39.793 19:14:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:39.793 19:14:17 -- nvmf/common.sh@116 -- # sync 00:13:40.050 19:14:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:40.050 19:14:17 -- nvmf/common.sh@119 -- # set +e 00:13:40.050 19:14:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:40.050 19:14:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:40.050 rmmod nvme_tcp 00:13:40.050 rmmod nvme_fabrics 00:13:40.050 rmmod nvme_keyring 00:13:40.050 19:14:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:40.050 19:14:17 -- nvmf/common.sh@123 -- # set -e 00:13:40.050 19:14:17 -- nvmf/common.sh@124 -- # return 0 00:13:40.050 19:14:17 -- nvmf/common.sh@477 -- # '[' -n 70793 ']' 00:13:40.050 19:14:17 -- nvmf/common.sh@478 -- # killprocess 70793 00:13:40.050 19:14:17 -- common/autotest_common.sh@924 -- # '[' -z 70793 ']' 00:13:40.050 19:14:17 -- common/autotest_common.sh@928 -- # kill -0 70793 00:13:40.050 19:14:17 -- common/autotest_common.sh@929 -- # uname 00:13:40.050 19:14:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:40.050 19:14:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 70793 00:13:40.050 killing process with pid 70793 00:13:40.050 19:14:17 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:13:40.050 19:14:17 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:13:40.050 19:14:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 70793' 00:13:40.050 
19:14:17 -- common/autotest_common.sh@943 -- # kill 70793 00:13:40.050 19:14:17 -- common/autotest_common.sh@948 -- # wait 70793 00:13:40.308 [2024-02-14 19:14:17.714276] app.c: 603:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:40.566 19:14:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:40.566 19:14:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:40.566 19:14:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:40.566 19:14:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.566 19:14:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:40.566 19:14:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.566 19:14:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.566 19:14:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.566 19:14:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:40.566 00:13:40.566 real 0m5.737s 00:13:40.566 user 0m23.642s 00:13:40.566 sys 0m1.307s 00:13:40.566 19:14:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:40.566 19:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:40.566 ************************************ 00:13:40.566 END TEST nvmf_host_management 00:13:40.566 ************************************ 00:13:40.566 19:14:17 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:40.566 00:13:40.566 real 0m6.265s 00:13:40.566 user 0m23.774s 00:13:40.566 sys 0m1.562s 00:13:40.566 19:14:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:40.566 19:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:40.566 ************************************ 00:13:40.566 END TEST nvmf_host_management 00:13:40.566 ************************************ 00:13:40.566 19:14:17 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:40.566 19:14:17 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:40.566 19:14:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:40.566 19:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:40.566 ************************************ 00:13:40.566 START TEST nvmf_lvol 00:13:40.566 ************************************ 00:13:40.566 19:14:17 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:40.566 * Looking for test storage... 
00:13:40.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.566 19:14:17 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:40.566 19:14:17 -- nvmf/common.sh@7 -- # uname -s 00:13:40.566 19:14:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.566 19:14:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.566 19:14:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.566 19:14:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.566 19:14:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.566 19:14:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.566 19:14:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.566 19:14:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.566 19:14:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.566 19:14:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.566 19:14:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:13:40.566 19:14:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:13:40.566 19:14:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.566 19:14:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.566 19:14:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:40.566 19:14:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:40.566 19:14:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.566 19:14:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.566 19:14:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.566 19:14:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.566 19:14:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.566 19:14:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.566 19:14:17 -- 
paths/export.sh@5 -- # export PATH 00:13:40.566 19:14:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.566 19:14:17 -- nvmf/common.sh@46 -- # : 0 00:13:40.566 19:14:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:40.566 19:14:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:40.567 19:14:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:40.567 19:14:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.567 19:14:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.567 19:14:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:40.567 19:14:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:40.567 19:14:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:40.824 19:14:17 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:40.824 19:14:17 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:40.824 19:14:17 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:40.824 19:14:17 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:40.824 19:14:17 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.824 19:14:17 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:40.824 19:14:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:40.824 19:14:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.824 19:14:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:40.824 19:14:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:40.824 19:14:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:40.824 19:14:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.824 19:14:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.824 19:14:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.824 19:14:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:40.824 19:14:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:40.824 19:14:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:40.824 19:14:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:40.824 19:14:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:40.824 19:14:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:40.824 19:14:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.824 19:14:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.824 19:14:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:40.824 19:14:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:40.824 19:14:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:40.824 19:14:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:40.824 19:14:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:40.824 19:14:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.824 19:14:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:40.824 19:14:17 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:40.824 19:14:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:40.824 19:14:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:40.824 19:14:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:40.824 19:14:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:40.824 Cannot find device "nvmf_tgt_br" 00:13:40.824 19:14:18 -- nvmf/common.sh@154 -- # true 00:13:40.824 19:14:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:40.824 Cannot find device "nvmf_tgt_br2" 00:13:40.824 19:14:18 -- nvmf/common.sh@155 -- # true 00:13:40.824 19:14:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:40.824 19:14:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:40.824 Cannot find device "nvmf_tgt_br" 00:13:40.824 19:14:18 -- nvmf/common.sh@157 -- # true 00:13:40.824 19:14:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:40.824 Cannot find device "nvmf_tgt_br2" 00:13:40.824 19:14:18 -- nvmf/common.sh@158 -- # true 00:13:40.824 19:14:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:40.824 19:14:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:40.824 19:14:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:40.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.824 19:14:18 -- nvmf/common.sh@161 -- # true 00:13:40.824 19:14:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:40.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.824 19:14:18 -- nvmf/common.sh@162 -- # true 00:13:40.824 19:14:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:40.824 19:14:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:40.825 19:14:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:40.825 19:14:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:40.825 19:14:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:40.825 19:14:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:40.825 19:14:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:40.825 19:14:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:40.825 19:14:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:40.825 19:14:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:40.825 19:14:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:40.825 19:14:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:40.825 19:14:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:40.825 19:14:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:40.825 19:14:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:40.825 19:14:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:40.825 19:14:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:40.825 19:14:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:40.825 19:14:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:41.082 19:14:18 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:41.082 19:14:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:41.082 19:14:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:41.082 19:14:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:41.082 19:14:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:41.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:13:41.082 00:13:41.082 --- 10.0.0.2 ping statistics --- 00:13:41.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.082 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:41.082 19:14:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:41.082 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:41.082 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:13:41.082 00:13:41.082 --- 10.0.0.3 ping statistics --- 00:13:41.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.082 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:41.082 19:14:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:41.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:41.082 00:13:41.082 --- 10.0.0.1 ping statistics --- 00:13:41.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.082 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:41.082 19:14:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.082 19:14:18 -- nvmf/common.sh@421 -- # return 0 00:13:41.082 19:14:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:41.082 19:14:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.082 19:14:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:41.082 19:14:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:41.082 19:14:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.082 19:14:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:41.082 19:14:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:41.082 19:14:18 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:41.082 19:14:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:41.082 19:14:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:41.082 19:14:18 -- common/autotest_common.sh@10 -- # set +x 00:13:41.082 19:14:18 -- nvmf/common.sh@469 -- # nvmfpid=71142 00:13:41.082 19:14:18 -- nvmf/common.sh@470 -- # waitforlisten 71142 00:13:41.082 19:14:18 -- common/autotest_common.sh@817 -- # '[' -z 71142 ']' 00:13:41.082 19:14:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.082 19:14:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:41.082 19:14:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:41.082 19:14:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:41.082 19:14:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:41.082 19:14:18 -- common/autotest_common.sh@10 -- # set +x 00:13:41.082 [2024-02-14 19:14:18.376521] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:13:41.082 [2024-02-14 19:14:18.376627] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.340 [2024-02-14 19:14:18.512360] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.340 [2024-02-14 19:14:18.631748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:41.340 [2024-02-14 19:14:18.631902] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.340 [2024-02-14 19:14:18.631914] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.340 [2024-02-14 19:14:18.631923] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.340 [2024-02-14 19:14:18.632099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.340 [2024-02-14 19:14:18.632758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.340 [2024-02-14 19:14:18.632766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.273 19:14:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:42.273 19:14:19 -- common/autotest_common.sh@850 -- # return 0 00:13:42.273 19:14:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:42.273 19:14:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:42.273 19:14:19 -- common/autotest_common.sh@10 -- # set +x 00:13:42.273 19:14:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.273 19:14:19 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:42.273 [2024-02-14 19:14:19.622939] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.273 19:14:19 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.839 19:14:19 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:42.839 19:14:19 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.839 19:14:20 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:42.839 19:14:20 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:43.097 19:14:20 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:43.356 19:14:20 -- target/nvmf_lvol.sh@29 -- # lvs=1eeca1c0-41db-4023-bd5e-34cd9b013a76 00:13:43.356 19:14:20 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1eeca1c0-41db-4023-bd5e-34cd9b013a76 lvol 20 00:13:43.613 19:14:21 -- target/nvmf_lvol.sh@32 -- # lvol=7a782b97-076a-4c9c-a8c5-9c4a100c419c 00:13:43.613 19:14:21 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:44.180 19:14:21 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 7a782b97-076a-4c9c-a8c5-9c4a100c419c 00:13:44.180 19:14:21 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:44.438 [2024-02-14 19:14:21.748892] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.438 19:14:21 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.696 19:14:22 -- target/nvmf_lvol.sh@42 -- # perf_pid=71290 00:13:44.696 19:14:22 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:44.696 19:14:22 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:45.642 19:14:23 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 7a782b97-076a-4c9c-a8c5-9c4a100c419c MY_SNAPSHOT 00:13:46.224 19:14:23 -- target/nvmf_lvol.sh@47 -- # snapshot=6854be6c-9bd3-4b86-8ef3-cd788e9cc809 00:13:46.224 19:14:23 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 7a782b97-076a-4c9c-a8c5-9c4a100c419c 30 00:13:46.481 19:14:23 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 6854be6c-9bd3-4b86-8ef3-cd788e9cc809 MY_CLONE 00:13:46.739 19:14:23 -- target/nvmf_lvol.sh@49 -- # clone=c8412aef-55ef-4a3e-9d0c-033a15d0d51d 00:13:46.739 19:14:23 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c8412aef-55ef-4a3e-9d0c-033a15d0d51d 00:13:47.671 19:14:24 -- target/nvmf_lvol.sh@53 -- # wait 71290 00:13:55.774 Initializing NVMe Controllers 00:13:55.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:55.774 Controller IO queue size 128, less than required. 00:13:55.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:55.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:55.774 Initialization complete. Launching workers. 
00:13:55.774 ======================================================== 00:13:55.774 Latency(us) 00:13:55.774 Device Information : IOPS MiB/s Average min max 00:13:55.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 6809.20 26.60 18805.79 307.74 111036.56 00:13:55.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7222.30 28.21 17735.92 2765.43 100532.48 00:13:55.774 ======================================================== 00:13:55.774 Total : 14031.50 54.81 18255.11 307.74 111036.56 00:13:55.774 00:13:55.774 19:14:32 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:55.774 19:14:32 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7a782b97-076a-4c9c-a8c5-9c4a100c419c 00:13:55.774 19:14:32 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1eeca1c0-41db-4023-bd5e-34cd9b013a76 00:13:55.774 19:14:33 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:55.774 19:14:33 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:55.774 19:14:33 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:55.774 19:14:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:55.774 19:14:33 -- nvmf/common.sh@116 -- # sync 00:13:55.774 19:14:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:55.774 19:14:33 -- nvmf/common.sh@119 -- # set +e 00:13:55.774 19:14:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:55.774 19:14:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:55.774 rmmod nvme_tcp 00:13:55.774 rmmod nvme_fabrics 00:13:55.774 rmmod nvme_keyring 00:13:56.033 19:14:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:56.033 19:14:33 -- nvmf/common.sh@123 -- # set -e 00:13:56.033 19:14:33 -- nvmf/common.sh@124 -- # return 0 00:13:56.033 19:14:33 -- nvmf/common.sh@477 -- # '[' -n 71142 ']' 00:13:56.033 19:14:33 -- nvmf/common.sh@478 -- # killprocess 71142 00:13:56.033 19:14:33 -- common/autotest_common.sh@924 -- # '[' -z 71142 ']' 00:13:56.033 19:14:33 -- common/autotest_common.sh@928 -- # kill -0 71142 00:13:56.033 19:14:33 -- common/autotest_common.sh@929 -- # uname 00:13:56.033 19:14:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:56.033 19:14:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 71142 00:13:56.033 19:14:33 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:56.033 19:14:33 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:56.033 killing process with pid 71142 00:13:56.033 19:14:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 71142' 00:13:56.033 19:14:33 -- common/autotest_common.sh@943 -- # kill 71142 00:13:56.033 19:14:33 -- common/autotest_common.sh@948 -- # wait 71142 00:13:56.291 19:14:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:56.291 19:14:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:56.291 19:14:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:56.291 19:14:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.291 19:14:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:56.291 19:14:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.291 19:14:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.291 19:14:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.291 19:14:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
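(Note: condensed for readability, the nvmf_lvol sequence traced above amounts to the RPC flow below. This is a minimal sketch reconstructed from the commands visible in the log, not the test script itself; rpc_py and the lvs/lvol/snap/clone variables stand in for the full rpc.py path and the UUIDs the harness actually captured.)

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192                       # TCP transport init
  $rpc_py bdev_malloc_create 64 512                                     # -> Malloc0
  $rpc_py bdev_malloc_create 64 512                                     # -> Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'     # RAID0 over both malloc bdevs
  lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)                     # lvstore on the raid bdev
  lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)                    # 20 MiB lvol
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 10 s random-write load over NVMe/TCP, with lvol operations running underneath it
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)                # snapshot while I/O is in flight
  $rpc_py bdev_lvol_resize "$lvol" 30                                   # grow the live lvol to 30 MiB
  clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)
  $rpc_py bdev_lvol_inflate "$clone"
  wait "$perf_pid"
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0              # teardown
  $rpc_py bdev_lvol_delete "$lvol"
  $rpc_py bdev_lvol_delete_lvstore -u "$lvs"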
00:13:56.291 ************************************ 00:13:56.291 END TEST nvmf_lvol 00:13:56.291 ************************************ 00:13:56.291 00:13:56.291 real 0m15.719s 00:13:56.291 user 1m5.813s 00:13:56.291 sys 0m3.673s 00:13:56.291 19:14:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:56.291 19:14:33 -- common/autotest_common.sh@10 -- # set +x 00:13:56.291 19:14:33 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:56.291 19:14:33 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:56.291 19:14:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:56.291 19:14:33 -- common/autotest_common.sh@10 -- # set +x 00:13:56.291 ************************************ 00:13:56.291 START TEST nvmf_lvs_grow 00:13:56.291 ************************************ 00:13:56.291 19:14:33 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:56.549 * Looking for test storage... 00:13:56.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:56.549 19:14:33 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:56.549 19:14:33 -- nvmf/common.sh@7 -- # uname -s 00:13:56.549 19:14:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.549 19:14:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.549 19:14:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.549 19:14:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.549 19:14:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.549 19:14:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.549 19:14:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.549 19:14:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.549 19:14:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.549 19:14:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.549 19:14:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:13:56.549 19:14:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:13:56.549 19:14:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.550 19:14:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.550 19:14:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:56.550 19:14:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.550 19:14:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.550 19:14:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.550 19:14:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.550 19:14:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.550 19:14:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.550 19:14:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.550 19:14:33 -- paths/export.sh@5 -- # export PATH 00:13:56.550 19:14:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.550 19:14:33 -- nvmf/common.sh@46 -- # : 0 00:13:56.550 19:14:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:56.550 19:14:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:56.550 19:14:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:56.550 19:14:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.550 19:14:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.550 19:14:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:56.550 19:14:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:56.550 19:14:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:56.550 19:14:33 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:56.550 19:14:33 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:56.550 19:14:33 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:13:56.550 19:14:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:56.550 19:14:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.550 19:14:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:56.550 19:14:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:56.550 19:14:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:56.550 19:14:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.550 19:14:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.550 19:14:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.550 19:14:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:56.550 19:14:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:56.550 19:14:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:56.550 19:14:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:56.550 19:14:33 
-- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:56.550 19:14:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:56.550 19:14:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.550 19:14:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.550 19:14:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:56.550 19:14:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:56.550 19:14:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:56.550 19:14:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:56.550 19:14:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:56.550 19:14:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.550 19:14:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:56.550 19:14:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:56.550 19:14:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:56.550 19:14:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:56.550 19:14:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:56.550 19:14:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:56.550 Cannot find device "nvmf_tgt_br" 00:13:56.550 19:14:33 -- nvmf/common.sh@154 -- # true 00:13:56.550 19:14:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:56.550 Cannot find device "nvmf_tgt_br2" 00:13:56.550 19:14:33 -- nvmf/common.sh@155 -- # true 00:13:56.550 19:14:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:56.550 19:14:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:56.550 Cannot find device "nvmf_tgt_br" 00:13:56.550 19:14:33 -- nvmf/common.sh@157 -- # true 00:13:56.550 19:14:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:56.550 Cannot find device "nvmf_tgt_br2" 00:13:56.550 19:14:33 -- nvmf/common.sh@158 -- # true 00:13:56.550 19:14:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:56.550 19:14:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:56.550 19:14:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:56.550 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.550 19:14:33 -- nvmf/common.sh@161 -- # true 00:13:56.550 19:14:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:56.550 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.550 19:14:33 -- nvmf/common.sh@162 -- # true 00:13:56.550 19:14:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:56.550 19:14:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:56.550 19:14:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:56.550 19:14:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:56.550 19:14:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:56.550 19:14:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:56.820 19:14:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:56.820 19:14:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:56.820 19:14:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:13:56.820 19:14:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:56.820 19:14:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:56.820 19:14:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:56.820 19:14:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:56.820 19:14:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:56.820 19:14:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:56.820 19:14:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:56.820 19:14:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:56.820 19:14:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:56.820 19:14:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:56.820 19:14:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:56.820 19:14:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:56.820 19:14:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:56.820 19:14:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:56.820 19:14:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:56.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:13:56.820 00:13:56.820 --- 10.0.0.2 ping statistics --- 00:13:56.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.820 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:56.820 19:14:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:56.820 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:56.820 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:56.820 00:13:56.820 --- 10.0.0.3 ping statistics --- 00:13:56.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.820 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:56.820 19:14:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:56.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:56.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:13:56.820 00:13:56.820 --- 10.0.0.1 ping statistics --- 00:13:56.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.820 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:56.820 19:14:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.820 19:14:34 -- nvmf/common.sh@421 -- # return 0 00:13:56.820 19:14:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:56.820 19:14:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.820 19:14:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:56.820 19:14:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:56.820 19:14:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.820 19:14:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:56.820 19:14:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:56.820 19:14:34 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:13:56.820 19:14:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:56.820 19:14:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:56.820 19:14:34 -- common/autotest_common.sh@10 -- # set +x 00:13:56.820 19:14:34 -- nvmf/common.sh@469 -- # nvmfpid=71654 00:13:56.820 19:14:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:56.820 19:14:34 -- nvmf/common.sh@470 -- # waitforlisten 71654 00:13:56.820 19:14:34 -- common/autotest_common.sh@817 -- # '[' -z 71654 ']' 00:13:56.820 19:14:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.820 19:14:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:56.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.820 19:14:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.820 19:14:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:56.820 19:14:34 -- common/autotest_common.sh@10 -- # set +x 00:13:56.820 [2024-02-14 19:14:34.169298] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:13:56.820 [2024-02-14 19:14:34.169651] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.093 [2024-02-14 19:14:34.304161] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.093 [2024-02-14 19:14:34.417350] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:57.093 [2024-02-14 19:14:34.417551] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.093 [2024-02-14 19:14:34.417566] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.093 [2024-02-14 19:14:34.417574] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
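(Note: the nvmf_veth_init steps scattered through the trace above consolidate into the small topology below. This is only a summary of what the common.sh helper sets up, with the interface names and addresses exactly as they appear in the log, not an extra step of the test; the per-interface "ip link set ... up" commands are omitted for brevity.)

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, stays on the host (10.0.0.1)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends, moved into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                               # bridge the three host-side peer ends
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT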
00:13:57.093 [2024-02-14 19:14:34.417610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.028 19:14:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:58.028 19:14:35 -- common/autotest_common.sh@850 -- # return 0 00:13:58.028 19:14:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:58.028 19:14:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:58.028 19:14:35 -- common/autotest_common.sh@10 -- # set +x 00:13:58.028 19:14:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.028 19:14:35 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:58.287 [2024-02-14 19:14:35.474006] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.287 19:14:35 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:13:58.287 19:14:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:58.287 19:14:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:58.287 19:14:35 -- common/autotest_common.sh@10 -- # set +x 00:13:58.287 ************************************ 00:13:58.287 START TEST lvs_grow_clean 00:13:58.287 ************************************ 00:13:58.287 19:14:35 -- common/autotest_common.sh@1102 -- # lvs_grow 00:13:58.287 19:14:35 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:58.287 19:14:35 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:58.287 19:14:35 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:58.287 19:14:35 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:58.287 19:14:35 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:58.287 19:14:35 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:58.287 19:14:35 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:58.287 19:14:35 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:58.287 19:14:35 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:58.546 19:14:35 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:58.546 19:14:35 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:58.804 19:14:36 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:13:58.804 19:14:36 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:13:58.804 19:14:36 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:59.062 19:14:36 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:59.062 19:14:36 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:59.062 19:14:36 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 lvol 150 00:13:59.320 19:14:36 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a357ccc6-7bae-47d7-9147-59586d02158a 00:13:59.320 19:14:36 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:59.320 19:14:36 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:59.578 [2024-02-14 19:14:36.886617] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:59.578 [2024-02-14 19:14:36.886706] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:59.578 true 00:13:59.578 19:14:36 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:13:59.578 19:14:36 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:59.836 19:14:37 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:59.836 19:14:37 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:00.094 19:14:37 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a357ccc6-7bae-47d7-9147-59586d02158a 00:14:00.352 19:14:37 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:00.609 [2024-02-14 19:14:37.883384] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.610 19:14:37 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:00.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:00.867 19:14:38 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71816 00:14:00.867 19:14:38 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:00.868 19:14:38 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:00.868 19:14:38 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71816 /var/tmp/bdevperf.sock 00:14:00.868 19:14:38 -- common/autotest_common.sh@817 -- # '[' -z 71816 ']' 00:14:00.868 19:14:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.868 19:14:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:00.868 19:14:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:00.868 19:14:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:00.868 19:14:38 -- common/autotest_common.sh@10 -- # set +x 00:14:00.868 [2024-02-14 19:14:38.188385] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:14:00.868 [2024-02-14 19:14:38.188526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71816 ] 00:14:01.126 [2024-02-14 19:14:38.329062] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.126 [2024-02-14 19:14:38.491775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.060 19:14:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:02.060 19:14:39 -- common/autotest_common.sh@850 -- # return 0 00:14:02.060 19:14:39 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:02.060 Nvme0n1 00:14:02.060 19:14:39 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:02.319 [ 00:14:02.319 { 00:14:02.319 "aliases": [ 00:14:02.319 "a357ccc6-7bae-47d7-9147-59586d02158a" 00:14:02.319 ], 00:14:02.319 "assigned_rate_limits": { 00:14:02.319 "r_mbytes_per_sec": 0, 00:14:02.319 "rw_ios_per_sec": 0, 00:14:02.319 "rw_mbytes_per_sec": 0, 00:14:02.319 "w_mbytes_per_sec": 0 00:14:02.319 }, 00:14:02.319 "block_size": 4096, 00:14:02.319 "claimed": false, 00:14:02.319 "driver_specific": { 00:14:02.319 "mp_policy": "active_passive", 00:14:02.319 "nvme": [ 00:14:02.319 { 00:14:02.319 "ctrlr_data": { 00:14:02.319 "ana_reporting": false, 00:14:02.319 "cntlid": 1, 00:14:02.319 "firmware_revision": "24.05", 00:14:02.319 "model_number": "SPDK bdev Controller", 00:14:02.319 "multi_ctrlr": true, 00:14:02.319 "oacs": { 00:14:02.319 "firmware": 0, 00:14:02.319 "format": 0, 00:14:02.319 "ns_manage": 0, 00:14:02.319 "security": 0 00:14:02.319 }, 00:14:02.319 "serial_number": "SPDK0", 00:14:02.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:02.319 "vendor_id": "0x8086" 00:14:02.319 }, 00:14:02.319 "ns_data": { 00:14:02.319 "can_share": true, 00:14:02.319 "id": 1 00:14:02.319 }, 00:14:02.319 "trid": { 00:14:02.319 "adrfam": "IPv4", 00:14:02.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:02.319 "traddr": "10.0.0.2", 00:14:02.319 "trsvcid": "4420", 00:14:02.319 "trtype": "TCP" 00:14:02.319 }, 00:14:02.319 "vs": { 00:14:02.319 "nvme_version": "1.3" 00:14:02.319 } 00:14:02.319 } 00:14:02.319 ] 00:14:02.319 }, 00:14:02.319 "name": "Nvme0n1", 00:14:02.319 "num_blocks": 38912, 00:14:02.319 "product_name": "NVMe disk", 00:14:02.319 "supported_io_types": { 00:14:02.319 "abort": true, 00:14:02.319 "compare": true, 00:14:02.319 "compare_and_write": true, 00:14:02.319 "flush": true, 00:14:02.319 "nvme_admin": true, 00:14:02.319 "nvme_io": true, 00:14:02.319 "read": true, 00:14:02.319 "reset": true, 00:14:02.319 "unmap": true, 00:14:02.319 "write": true, 00:14:02.319 "write_zeroes": true 00:14:02.319 }, 00:14:02.319 "uuid": "a357ccc6-7bae-47d7-9147-59586d02158a", 00:14:02.319 "zoned": false 00:14:02.319 } 00:14:02.319 ] 00:14:02.319 19:14:39 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:02.319 19:14:39 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71863 00:14:02.319 19:14:39 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:02.577 Running I/O for 10 seconds... 
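(Note: filtering out the xtrace noise, the lvs_grow_clean setup that leads into this 10-second run is roughly the following. It is a sketch reconstructed from the trace; rpc_py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the lvs/lvol variables hold the UUIDs printed above.)

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio_file"
  $rpc_py bdev_aio_create "$aio_file" aio_bdev 4096
  lvs=$($rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)              # 49 data clusters to start
  lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 150)               # 150 MiB lvol
  truncate -s 400M "$aio_file"                                      # grow the backing file...
  $rpc_py bdev_aio_rescan aio_bdev                                  # ...and let the aio bdev pick it up
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf runs as its own process (-z waits for RPC) and attaches over NVMe/TCP
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests                       # the 10 s I/O shown below
  $rpc_py bdev_lvol_grow_lvstore -u "$lvs"                          # issued mid-run; clusters 49 -> 99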
00:14:03.512 Latency(us) 00:14:03.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.512 Nvme0n1 : 1.00 8236.00 32.17 0.00 0.00 0.00 0.00 0.00 00:14:03.512 =================================================================================================================== 00:14:03.512 Total : 8236.00 32.17 0.00 0.00 0.00 0.00 0.00 00:14:03.512 00:14:04.446 19:14:41 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:14:04.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.446 Nvme0n1 : 2.00 8265.50 32.29 0.00 0.00 0.00 0.00 0.00 00:14:04.446 =================================================================================================================== 00:14:04.446 Total : 8265.50 32.29 0.00 0.00 0.00 0.00 0.00 00:14:04.446 00:14:04.703 true 00:14:04.703 19:14:42 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:14:04.703 19:14:42 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:04.962 19:14:42 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:04.962 19:14:42 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:04.962 19:14:42 -- target/nvmf_lvs_grow.sh@65 -- # wait 71863 00:14:05.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.527 Nvme0n1 : 3.00 8376.00 32.72 0.00 0.00 0.00 0.00 0.00 00:14:05.527 =================================================================================================================== 00:14:05.527 Total : 8376.00 32.72 0.00 0.00 0.00 0.00 0.00 00:14:05.527 00:14:06.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.480 Nvme0n1 : 4.00 8366.75 32.68 0.00 0.00 0.00 0.00 0.00 00:14:06.480 =================================================================================================================== 00:14:06.480 Total : 8366.75 32.68 0.00 0.00 0.00 0.00 0.00 00:14:06.480 00:14:07.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.414 Nvme0n1 : 5.00 8340.60 32.58 0.00 0.00 0.00 0.00 0.00 00:14:07.414 =================================================================================================================== 00:14:07.414 Total : 8340.60 32.58 0.00 0.00 0.00 0.00 0.00 00:14:07.414 00:14:08.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.786 Nvme0n1 : 6.00 8343.67 32.59 0.00 0.00 0.00 0.00 0.00 00:14:08.786 =================================================================================================================== 00:14:08.786 Total : 8343.67 32.59 0.00 0.00 0.00 0.00 0.00 00:14:08.786 00:14:09.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.718 Nvme0n1 : 7.00 8294.43 32.40 0.00 0.00 0.00 0.00 0.00 00:14:09.718 =================================================================================================================== 00:14:09.718 Total : 8294.43 32.40 0.00 0.00 0.00 0.00 0.00 00:14:09.718 00:14:10.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.651 Nvme0n1 : 8.00 8238.38 32.18 0.00 0.00 0.00 0.00 0.00 00:14:10.651 
=================================================================================================================== 00:14:10.651 Total : 8238.38 32.18 0.00 0.00 0.00 0.00 0.00 00:14:10.651 00:14:11.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.585 Nvme0n1 : 9.00 8214.89 32.09 0.00 0.00 0.00 0.00 0.00 00:14:11.585 =================================================================================================================== 00:14:11.585 Total : 8214.89 32.09 0.00 0.00 0.00 0.00 0.00 00:14:11.585 00:14:12.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.521 Nvme0n1 : 10.00 8146.00 31.82 0.00 0.00 0.00 0.00 0.00 00:14:12.521 =================================================================================================================== 00:14:12.521 Total : 8146.00 31.82 0.00 0.00 0.00 0.00 0.00 00:14:12.521 00:14:12.521 00:14:12.521 Latency(us) 00:14:12.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.521 Nvme0n1 : 10.01 8147.93 31.83 0.00 0.00 15704.52 6702.55 35508.60 00:14:12.521 =================================================================================================================== 00:14:12.521 Total : 8147.93 31.83 0.00 0.00 15704.52 6702.55 35508.60 00:14:12.521 0 00:14:12.521 19:14:49 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71816 00:14:12.521 19:14:49 -- common/autotest_common.sh@924 -- # '[' -z 71816 ']' 00:14:12.521 19:14:49 -- common/autotest_common.sh@928 -- # kill -0 71816 00:14:12.521 19:14:49 -- common/autotest_common.sh@929 -- # uname 00:14:12.521 19:14:49 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:12.521 19:14:49 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 71816 00:14:12.521 19:14:49 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:14:12.521 killing process with pid 71816 00:14:12.521 19:14:49 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:14:12.521 19:14:49 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 71816' 00:14:12.521 19:14:49 -- common/autotest_common.sh@943 -- # kill 71816 00:14:12.521 Received shutdown signal, test time was about 10.000000 seconds 00:14:12.521 00:14:12.521 Latency(us) 00:14:12.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.521 =================================================================================================================== 00:14:12.521 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:12.521 19:14:49 -- common/autotest_common.sh@948 -- # wait 71816 00:14:13.087 19:14:50 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:13.346 19:14:50 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:14:13.346 19:14:50 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:13.604 19:14:50 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:13.604 19:14:50 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:13.604 19:14:50 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:13.604 [2024-02-14 19:14:51.005137] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:13.863 
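(Note: a quick cross-check of the cluster accounting in this run, using the 4194304-byte (4 MiB) cluster size passed to bdev_lvol_create_lvstore; the one-cluster offset is consistent with the lvstore keeping a cluster's worth of metadata.)

  200 MiB aio file -> 200 / 4 = 50 clusters, total_data_clusters = 49
  400 MiB aio file -> 400 / 4 = 100 clusters, total_data_clusters = 99 after bdev_lvol_grow_lvstore
  150 MiB lvol     -> ceil(150 / 4) = 38 clusters allocated, 99 - 38 = 61 = free_clusters checked above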
19:14:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:14:13.863 19:14:51 -- common/autotest_common.sh@638 -- # local es=0 00:14:13.863 19:14:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:14:13.863 19:14:51 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.863 19:14:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:13.863 19:14:51 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.863 19:14:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:13.863 19:14:51 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.863 19:14:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:13.863 19:14:51 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.863 19:14:51 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:13.863 19:14:51 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:14:14.121 2024/02/14 19:14:51 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:a1eacce2-1fb4-4b04-bf71-34a36b7d4893], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:14.121 request: 00:14:14.121 { 00:14:14.121 "method": "bdev_lvol_get_lvstores", 00:14:14.121 "params": { 00:14:14.121 "uuid": "a1eacce2-1fb4-4b04-bf71-34a36b7d4893" 00:14:14.121 } 00:14:14.121 } 00:14:14.121 Got JSON-RPC error response 00:14:14.121 GoRPCClient: error on JSON-RPC call 00:14:14.121 19:14:51 -- common/autotest_common.sh@641 -- # es=1 00:14:14.121 19:14:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:14.121 19:14:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:14.121 19:14:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:14.121 19:14:51 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:14.379 aio_bdev 00:14:14.379 19:14:51 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a357ccc6-7bae-47d7-9147-59586d02158a 00:14:14.379 19:14:51 -- common/autotest_common.sh@885 -- # local bdev_name=a357ccc6-7bae-47d7-9147-59586d02158a 00:14:14.379 19:14:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:14.379 19:14:51 -- common/autotest_common.sh@887 -- # local i 00:14:14.379 19:14:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:14.379 19:14:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:14.379 19:14:51 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:14.637 19:14:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a357ccc6-7bae-47d7-9147-59586d02158a -t 2000 00:14:14.637 [ 00:14:14.637 { 00:14:14.637 "aliases": [ 00:14:14.637 "lvs/lvol" 00:14:14.637 ], 00:14:14.637 "assigned_rate_limits": { 00:14:14.637 "r_mbytes_per_sec": 0, 00:14:14.637 "rw_ios_per_sec": 0, 00:14:14.637 "rw_mbytes_per_sec": 0, 00:14:14.637 "w_mbytes_per_sec": 0 00:14:14.637 }, 00:14:14.637 "block_size": 4096, 
00:14:14.637 "claimed": false, 00:14:14.637 "driver_specific": { 00:14:14.637 "lvol": { 00:14:14.637 "base_bdev": "aio_bdev", 00:14:14.637 "clone": false, 00:14:14.637 "esnap_clone": false, 00:14:14.637 "lvol_store_uuid": "a1eacce2-1fb4-4b04-bf71-34a36b7d4893", 00:14:14.637 "snapshot": false, 00:14:14.637 "thin_provision": false 00:14:14.637 } 00:14:14.637 }, 00:14:14.638 "name": "a357ccc6-7bae-47d7-9147-59586d02158a", 00:14:14.638 "num_blocks": 38912, 00:14:14.638 "product_name": "Logical Volume", 00:14:14.638 "supported_io_types": { 00:14:14.638 "abort": false, 00:14:14.638 "compare": false, 00:14:14.638 "compare_and_write": false, 00:14:14.638 "flush": false, 00:14:14.638 "nvme_admin": false, 00:14:14.638 "nvme_io": false, 00:14:14.638 "read": true, 00:14:14.638 "reset": true, 00:14:14.638 "unmap": true, 00:14:14.638 "write": true, 00:14:14.638 "write_zeroes": true 00:14:14.638 }, 00:14:14.638 "uuid": "a357ccc6-7bae-47d7-9147-59586d02158a", 00:14:14.638 "zoned": false 00:14:14.638 } 00:14:14.638 ] 00:14:14.908 19:14:52 -- common/autotest_common.sh@893 -- # return 0 00:14:14.908 19:14:52 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:14:14.908 19:14:52 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:14.908 19:14:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:14.908 19:14:52 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:14:14.908 19:14:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:15.179 19:14:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:15.179 19:14:52 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a357ccc6-7bae-47d7-9147-59586d02158a 00:14:15.437 19:14:52 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1eacce2-1fb4-4b04-bf71-34a36b7d4893 00:14:15.696 19:14:53 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:15.959 19:14:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:16.526 ************************************ 00:14:16.526 END TEST lvs_grow_clean 00:14:16.527 ************************************ 00:14:16.527 00:14:16.527 real 0m18.172s 00:14:16.527 user 0m17.279s 00:14:16.527 sys 0m2.288s 00:14:16.527 19:14:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:16.527 19:14:53 -- common/autotest_common.sh@10 -- # set +x 00:14:16.527 19:14:53 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:16.527 19:14:53 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:16.527 19:14:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:16.527 19:14:53 -- common/autotest_common.sh@10 -- # set +x 00:14:16.527 ************************************ 00:14:16.527 START TEST lvs_grow_dirty 00:14:16.527 ************************************ 00:14:16.527 19:14:53 -- common/autotest_common.sh@1102 -- # lvs_grow dirty 00:14:16.527 19:14:53 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:16.527 19:14:53 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:16.527 19:14:53 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:16.527 19:14:53 -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:14:16.527 19:14:53 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:16.527 19:14:53 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:16.527 19:14:53 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:16.527 19:14:53 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:16.527 19:14:53 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:16.785 19:14:54 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:16.785 19:14:54 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:17.044 19:14:54 -- target/nvmf_lvs_grow.sh@28 -- # lvs=fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:17.044 19:14:54 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:17.044 19:14:54 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:17.302 19:14:54 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:17.302 19:14:54 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:17.302 19:14:54 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f lvol 150 00:14:17.561 19:14:54 -- target/nvmf_lvs_grow.sh@33 -- # lvol=77214db2-ccf7-430e-bc07-dcc1283d3d7a 00:14:17.561 19:14:54 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:17.561 19:14:54 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:17.820 [2024-02-14 19:14:55.045464] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:17.820 [2024-02-14 19:14:55.045577] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:17.820 true 00:14:17.820 19:14:55 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:17.820 19:14:55 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:18.078 19:14:55 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:18.078 19:14:55 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:18.336 19:14:55 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 77214db2-ccf7-430e-bc07-dcc1283d3d7a 00:14:18.594 19:14:55 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:18.853 19:14:56 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:19.112 19:14:56 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72254 00:14:19.112 19:14:56 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 
-q 128 -w randwrite -t 10 -S 1 -z 00:14:19.112 19:14:56 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:19.112 19:14:56 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72254 /var/tmp/bdevperf.sock 00:14:19.112 19:14:56 -- common/autotest_common.sh@817 -- # '[' -z 72254 ']' 00:14:19.112 19:14:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.112 19:14:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:19.112 19:14:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.112 19:14:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:19.112 19:14:56 -- common/autotest_common.sh@10 -- # set +x 00:14:19.112 [2024-02-14 19:14:56.467613] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:19.112 [2024-02-14 19:14:56.467714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72254 ] 00:14:19.371 [2024-02-14 19:14:56.600869] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.371 [2024-02-14 19:14:56.758363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.306 19:14:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.306 19:14:57 -- common/autotest_common.sh@850 -- # return 0 00:14:20.306 19:14:57 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:20.565 Nvme0n1 00:14:20.565 19:14:57 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:20.824 [ 00:14:20.824 { 00:14:20.824 "aliases": [ 00:14:20.824 "77214db2-ccf7-430e-bc07-dcc1283d3d7a" 00:14:20.824 ], 00:14:20.824 "assigned_rate_limits": { 00:14:20.824 "r_mbytes_per_sec": 0, 00:14:20.824 "rw_ios_per_sec": 0, 00:14:20.824 "rw_mbytes_per_sec": 0, 00:14:20.824 "w_mbytes_per_sec": 0 00:14:20.824 }, 00:14:20.824 "block_size": 4096, 00:14:20.824 "claimed": false, 00:14:20.824 "driver_specific": { 00:14:20.824 "mp_policy": "active_passive", 00:14:20.824 "nvme": [ 00:14:20.824 { 00:14:20.824 "ctrlr_data": { 00:14:20.824 "ana_reporting": false, 00:14:20.824 "cntlid": 1, 00:14:20.824 "firmware_revision": "24.05", 00:14:20.824 "model_number": "SPDK bdev Controller", 00:14:20.824 "multi_ctrlr": true, 00:14:20.824 "oacs": { 00:14:20.824 "firmware": 0, 00:14:20.824 "format": 0, 00:14:20.824 "ns_manage": 0, 00:14:20.824 "security": 0 00:14:20.824 }, 00:14:20.824 "serial_number": "SPDK0", 00:14:20.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:20.824 "vendor_id": "0x8086" 00:14:20.824 }, 00:14:20.824 "ns_data": { 00:14:20.824 "can_share": true, 00:14:20.824 "id": 1 00:14:20.824 }, 00:14:20.824 "trid": { 00:14:20.824 "adrfam": "IPv4", 00:14:20.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:20.824 "traddr": "10.0.0.2", 00:14:20.824 "trsvcid": "4420", 00:14:20.824 "trtype": "TCP" 00:14:20.824 }, 00:14:20.824 "vs": { 00:14:20.824 "nvme_version": "1.3" 00:14:20.824 } 00:14:20.824 } 00:14:20.824 ] 00:14:20.824 }, 00:14:20.824 
"name": "Nvme0n1", 00:14:20.824 "num_blocks": 38912, 00:14:20.824 "product_name": "NVMe disk", 00:14:20.824 "supported_io_types": { 00:14:20.824 "abort": true, 00:14:20.824 "compare": true, 00:14:20.824 "compare_and_write": true, 00:14:20.824 "flush": true, 00:14:20.824 "nvme_admin": true, 00:14:20.824 "nvme_io": true, 00:14:20.824 "read": true, 00:14:20.824 "reset": true, 00:14:20.824 "unmap": true, 00:14:20.824 "write": true, 00:14:20.824 "write_zeroes": true 00:14:20.824 }, 00:14:20.824 "uuid": "77214db2-ccf7-430e-bc07-dcc1283d3d7a", 00:14:20.824 "zoned": false 00:14:20.824 } 00:14:20.824 ] 00:14:20.824 19:14:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72301 00:14:20.824 19:14:58 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:20.824 19:14:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:20.824 Running I/O for 10 seconds... 00:14:21.761 Latency(us) 00:14:21.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.761 Nvme0n1 : 1.00 8663.00 33.84 0.00 0.00 0.00 0.00 0.00 00:14:21.761 =================================================================================================================== 00:14:21.761 Total : 8663.00 33.84 0.00 0.00 0.00 0.00 0.00 00:14:21.761 00:14:22.697 19:15:00 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:22.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.956 Nvme0n1 : 2.00 8541.50 33.37 0.00 0.00 0.00 0.00 0.00 00:14:22.956 =================================================================================================================== 00:14:22.956 Total : 8541.50 33.37 0.00 0.00 0.00 0.00 0.00 00:14:22.956 00:14:22.956 true 00:14:22.956 19:15:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:22.956 19:15:00 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:23.522 19:15:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:23.522 19:15:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:23.522 19:15:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 72301 00:14:23.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.780 Nvme0n1 : 3.00 8617.00 33.66 0.00 0.00 0.00 0.00 0.00 00:14:23.780 =================================================================================================================== 00:14:23.780 Total : 8617.00 33.66 0.00 0.00 0.00 0.00 0.00 00:14:23.780 00:14:25.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.174 Nvme0n1 : 4.00 8582.00 33.52 0.00 0.00 0.00 0.00 0.00 00:14:25.174 =================================================================================================================== 00:14:25.174 Total : 8582.00 33.52 0.00 0.00 0.00 0.00 0.00 00:14:25.174 00:14:25.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.751 Nvme0n1 : 5.00 8529.80 33.32 0.00 0.00 0.00 0.00 0.00 00:14:25.751 =================================================================================================================== 00:14:25.751 Total : 8529.80 33.32 0.00 0.00 0.00 0.00 0.00 00:14:25.751 00:14:27.126 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:14:27.126 Nvme0n1 : 6.00 8507.50 33.23 0.00 0.00 0.00 0.00 0.00 00:14:27.126 =================================================================================================================== 00:14:27.126 Total : 8507.50 33.23 0.00 0.00 0.00 0.00 0.00 00:14:27.126 00:14:28.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.062 Nvme0n1 : 7.00 8253.57 32.24 0.00 0.00 0.00 0.00 0.00 00:14:28.062 =================================================================================================================== 00:14:28.062 Total : 8253.57 32.24 0.00 0.00 0.00 0.00 0.00 00:14:28.062 00:14:28.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.998 Nvme0n1 : 8.00 8188.50 31.99 0.00 0.00 0.00 0.00 0.00 00:14:28.998 =================================================================================================================== 00:14:28.998 Total : 8188.50 31.99 0.00 0.00 0.00 0.00 0.00 00:14:28.998 00:14:29.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.934 Nvme0n1 : 9.00 8155.67 31.86 0.00 0.00 0.00 0.00 0.00 00:14:29.934 =================================================================================================================== 00:14:29.934 Total : 8155.67 31.86 0.00 0.00 0.00 0.00 0.00 00:14:29.934 00:14:30.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.871 Nvme0n1 : 10.00 8124.40 31.74 0.00 0.00 0.00 0.00 0.00 00:14:30.871 =================================================================================================================== 00:14:30.871 Total : 8124.40 31.74 0.00 0.00 0.00 0.00 0.00 00:14:30.871 00:14:30.871 00:14:30.871 Latency(us) 00:14:30.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.871 Nvme0n1 : 10.02 8124.78 31.74 0.00 0.00 15749.81 7208.96 155379.90 00:14:30.871 =================================================================================================================== 00:14:30.871 Total : 8124.78 31.74 0.00 0.00 15749.81 7208.96 155379.90 00:14:30.871 0 00:14:30.871 19:15:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72254 00:14:30.871 19:15:08 -- common/autotest_common.sh@924 -- # '[' -z 72254 ']' 00:14:30.871 19:15:08 -- common/autotest_common.sh@928 -- # kill -0 72254 00:14:30.871 19:15:08 -- common/autotest_common.sh@929 -- # uname 00:14:30.871 19:15:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:30.871 19:15:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 72254 00:14:30.871 killing process with pid 72254 00:14:30.871 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.871 00:14:30.871 Latency(us) 00:14:30.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.871 =================================================================================================================== 00:14:30.871 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.871 19:15:08 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:14:30.871 19:15:08 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:14:30.871 19:15:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 72254' 00:14:30.871 19:15:08 -- common/autotest_common.sh@943 -- # kill 72254 00:14:30.871 19:15:08 -- common/autotest_common.sh@948 -- # 
wait 72254 00:14:31.439 19:15:08 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:31.439 19:15:08 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:31.439 19:15:08 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:32.008 19:15:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:32.008 19:15:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:32.008 19:15:09 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 71654 00:14:32.008 19:15:09 -- target/nvmf_lvs_grow.sh@74 -- # wait 71654 00:14:32.008 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 71654 Killed "${NVMF_APP[@]}" "$@" 00:14:32.008 19:15:09 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:32.008 19:15:09 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:32.008 19:15:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:32.008 19:15:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:32.008 19:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.008 19:15:09 -- nvmf/common.sh@469 -- # nvmfpid=72452 00:14:32.008 19:15:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:32.008 19:15:09 -- nvmf/common.sh@470 -- # waitforlisten 72452 00:14:32.008 19:15:09 -- common/autotest_common.sh@817 -- # '[' -z 72452 ']' 00:14:32.008 19:15:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.008 19:15:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:32.008 19:15:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.008 19:15:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:32.008 19:15:09 -- common/autotest_common.sh@10 -- # set +x 00:14:32.008 [2024-02-14 19:15:09.225471] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:32.008 [2024-02-14 19:15:09.225604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.008 [2024-02-14 19:15:09.367607] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.267 [2024-02-14 19:15:09.481942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:32.267 [2024-02-14 19:15:09.482092] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.267 [2024-02-14 19:15:09.482105] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.267 [2024-02-14 19:15:09.482114] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
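The records that follow re-create the AIO bdev that backs the dirty lvstore and then re-read its cluster counters. As a standalone sketch of that recovery check (not captured output; $SPDK_DIR and $LVS_UUID are placeholders for the repo checkout and the lvstore UUID seen in the trace):

rpc=$SPDK_DIR/scripts/rpc.py
# Re-attach the AIO backing file; blobstore recovery runs when the lvstore is loaded.
$rpc bdev_aio_create $SPDK_DIR/test/nvmf/target/aio_bdev aio_bdev 4096
$rpc bdev_wait_for_examine
# The lvstore should come back with the counts recorded before the forced kill.
free=$($rpc bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
total=$($rpc bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 )) && echo "lvstore recovered: $free free / $total data clusters"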
00:14:32.267 [2024-02-14 19:15:09.482138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.834 19:15:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:32.834 19:15:10 -- common/autotest_common.sh@850 -- # return 0 00:14:32.834 19:15:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:32.834 19:15:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:32.834 19:15:10 -- common/autotest_common.sh@10 -- # set +x 00:14:32.835 19:15:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.835 19:15:10 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:33.093 [2024-02-14 19:15:10.477929] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:33.093 [2024-02-14 19:15:10.478283] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:33.093 [2024-02-14 19:15:10.478443] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:33.352 19:15:10 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:33.352 19:15:10 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 77214db2-ccf7-430e-bc07-dcc1283d3d7a 00:14:33.352 19:15:10 -- common/autotest_common.sh@885 -- # local bdev_name=77214db2-ccf7-430e-bc07-dcc1283d3d7a 00:14:33.352 19:15:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:33.352 19:15:10 -- common/autotest_common.sh@887 -- # local i 00:14:33.352 19:15:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:33.352 19:15:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:33.352 19:15:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:33.610 19:15:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77214db2-ccf7-430e-bc07-dcc1283d3d7a -t 2000 00:14:33.869 [ 00:14:33.869 { 00:14:33.869 "aliases": [ 00:14:33.869 "lvs/lvol" 00:14:33.869 ], 00:14:33.869 "assigned_rate_limits": { 00:14:33.869 "r_mbytes_per_sec": 0, 00:14:33.869 "rw_ios_per_sec": 0, 00:14:33.869 "rw_mbytes_per_sec": 0, 00:14:33.869 "w_mbytes_per_sec": 0 00:14:33.869 }, 00:14:33.869 "block_size": 4096, 00:14:33.869 "claimed": false, 00:14:33.869 "driver_specific": { 00:14:33.869 "lvol": { 00:14:33.869 "base_bdev": "aio_bdev", 00:14:33.869 "clone": false, 00:14:33.869 "esnap_clone": false, 00:14:33.869 "lvol_store_uuid": "fb9c9c23-ab36-4aba-8140-a1d71c2b216f", 00:14:33.869 "snapshot": false, 00:14:33.869 "thin_provision": false 00:14:33.869 } 00:14:33.869 }, 00:14:33.869 "name": "77214db2-ccf7-430e-bc07-dcc1283d3d7a", 00:14:33.869 "num_blocks": 38912, 00:14:33.869 "product_name": "Logical Volume", 00:14:33.869 "supported_io_types": { 00:14:33.869 "abort": false, 00:14:33.869 "compare": false, 00:14:33.869 "compare_and_write": false, 00:14:33.869 "flush": false, 00:14:33.869 "nvme_admin": false, 00:14:33.869 "nvme_io": false, 00:14:33.869 "read": true, 00:14:33.869 "reset": true, 00:14:33.869 "unmap": true, 00:14:33.869 "write": true, 00:14:33.869 "write_zeroes": true 00:14:33.869 }, 00:14:33.869 "uuid": "77214db2-ccf7-430e-bc07-dcc1283d3d7a", 00:14:33.869 "zoned": false 00:14:33.869 } 00:14:33.869 ] 00:14:33.869 19:15:11 -- common/autotest_common.sh@893 -- # return 0 00:14:33.869 19:15:11 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:33.869 19:15:11 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:34.128 19:15:11 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:34.128 19:15:11 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:34.128 19:15:11 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:34.386 19:15:11 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:34.386 19:15:11 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:34.645 [2024-02-14 19:15:11.810625] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:34.645 19:15:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:34.645 19:15:11 -- common/autotest_common.sh@638 -- # local es=0 00:14:34.645 19:15:11 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:34.645 19:15:11 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.645 19:15:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:34.645 19:15:11 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.645 19:15:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:34.645 19:15:11 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.645 19:15:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:34.645 19:15:11 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.645 19:15:11 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:34.645 19:15:11 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:34.903 2024/02/14 19:15:12 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:fb9c9c23-ab36-4aba-8140-a1d71c2b216f], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:34.903 request: 00:14:34.903 { 00:14:34.903 "method": "bdev_lvol_get_lvstores", 00:14:34.903 "params": { 00:14:34.903 "uuid": "fb9c9c23-ab36-4aba-8140-a1d71c2b216f" 00:14:34.903 } 00:14:34.903 } 00:14:34.903 Got JSON-RPC error response 00:14:34.903 GoRPCClient: error on JSON-RPC call 00:14:34.903 19:15:12 -- common/autotest_common.sh@641 -- # es=1 00:14:34.903 19:15:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:34.903 19:15:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:34.903 19:15:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:34.903 19:15:12 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:35.162 aio_bdev 00:14:35.162 19:15:12 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 77214db2-ccf7-430e-bc07-dcc1283d3d7a 00:14:35.162 19:15:12 -- common/autotest_common.sh@885 -- # local bdev_name=77214db2-ccf7-430e-bc07-dcc1283d3d7a 00:14:35.162 19:15:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:35.162 
19:15:12 -- common/autotest_common.sh@887 -- # local i 00:14:35.162 19:15:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:35.162 19:15:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:35.162 19:15:12 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:35.421 19:15:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77214db2-ccf7-430e-bc07-dcc1283d3d7a -t 2000 00:14:35.421 [ 00:14:35.421 { 00:14:35.421 "aliases": [ 00:14:35.421 "lvs/lvol" 00:14:35.421 ], 00:14:35.421 "assigned_rate_limits": { 00:14:35.421 "r_mbytes_per_sec": 0, 00:14:35.421 "rw_ios_per_sec": 0, 00:14:35.421 "rw_mbytes_per_sec": 0, 00:14:35.421 "w_mbytes_per_sec": 0 00:14:35.421 }, 00:14:35.421 "block_size": 4096, 00:14:35.421 "claimed": false, 00:14:35.421 "driver_specific": { 00:14:35.421 "lvol": { 00:14:35.421 "base_bdev": "aio_bdev", 00:14:35.421 "clone": false, 00:14:35.421 "esnap_clone": false, 00:14:35.421 "lvol_store_uuid": "fb9c9c23-ab36-4aba-8140-a1d71c2b216f", 00:14:35.421 "snapshot": false, 00:14:35.421 "thin_provision": false 00:14:35.421 } 00:14:35.421 }, 00:14:35.421 "name": "77214db2-ccf7-430e-bc07-dcc1283d3d7a", 00:14:35.421 "num_blocks": 38912, 00:14:35.421 "product_name": "Logical Volume", 00:14:35.421 "supported_io_types": { 00:14:35.421 "abort": false, 00:14:35.421 "compare": false, 00:14:35.421 "compare_and_write": false, 00:14:35.421 "flush": false, 00:14:35.421 "nvme_admin": false, 00:14:35.421 "nvme_io": false, 00:14:35.421 "read": true, 00:14:35.421 "reset": true, 00:14:35.421 "unmap": true, 00:14:35.421 "write": true, 00:14:35.421 "write_zeroes": true 00:14:35.421 }, 00:14:35.421 "uuid": "77214db2-ccf7-430e-bc07-dcc1283d3d7a", 00:14:35.421 "zoned": false 00:14:35.421 } 00:14:35.421 ] 00:14:35.680 19:15:12 -- common/autotest_common.sh@893 -- # return 0 00:14:35.680 19:15:12 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:35.680 19:15:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:35.939 19:15:13 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:35.939 19:15:13 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:35.939 19:15:13 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:36.197 19:15:13 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:36.197 19:15:13 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 77214db2-ccf7-430e-bc07-dcc1283d3d7a 00:14:36.455 19:15:13 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb9c9c23-ab36-4aba-8140-a1d71c2b216f 00:14:36.455 19:15:13 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:37.022 19:15:14 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:37.282 ************************************ 00:14:37.282 END TEST lvs_grow_dirty 00:14:37.282 ************************************ 00:14:37.282 00:14:37.282 real 0m20.790s 00:14:37.282 user 0m42.663s 00:14:37.282 sys 0m8.352s 00:14:37.282 19:15:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:37.282 19:15:14 -- common/autotest_common.sh@10 -- # set +x 00:14:37.282 19:15:14 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:37.282 19:15:14 -- common/autotest_common.sh@794 -- # type=--id 00:14:37.282 19:15:14 -- common/autotest_common.sh@795 -- # id=0 00:14:37.282 19:15:14 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:37.282 19:15:14 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:37.282 19:15:14 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:37.282 19:15:14 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:37.282 19:15:14 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:37.282 19:15:14 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:37.282 nvmf_trace.0 00:14:37.282 19:15:14 -- common/autotest_common.sh@809 -- # return 0 00:14:37.282 19:15:14 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:37.282 19:15:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:37.282 19:15:14 -- nvmf/common.sh@116 -- # sync 00:14:37.541 19:15:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:37.541 19:15:14 -- nvmf/common.sh@119 -- # set +e 00:14:37.541 19:15:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:37.541 19:15:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:37.541 rmmod nvme_tcp 00:14:37.541 rmmod nvme_fabrics 00:14:37.541 rmmod nvme_keyring 00:14:37.541 19:15:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:37.541 19:15:14 -- nvmf/common.sh@123 -- # set -e 00:14:37.541 19:15:14 -- nvmf/common.sh@124 -- # return 0 00:14:37.541 19:15:14 -- nvmf/common.sh@477 -- # '[' -n 72452 ']' 00:14:37.541 19:15:14 -- nvmf/common.sh@478 -- # killprocess 72452 00:14:37.541 19:15:14 -- common/autotest_common.sh@924 -- # '[' -z 72452 ']' 00:14:37.541 19:15:14 -- common/autotest_common.sh@928 -- # kill -0 72452 00:14:37.541 19:15:14 -- common/autotest_common.sh@929 -- # uname 00:14:37.541 19:15:14 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:37.541 19:15:14 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 72452 00:14:37.541 19:15:14 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:37.541 19:15:14 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:37.541 19:15:14 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 72452' 00:14:37.541 killing process with pid 72452 00:14:37.541 19:15:14 -- common/autotest_common.sh@943 -- # kill 72452 00:14:37.541 19:15:14 -- common/autotest_common.sh@948 -- # wait 72452 00:14:37.801 19:15:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:37.801 19:15:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:37.801 19:15:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:37.801 19:15:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.801 19:15:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:37.801 19:15:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.801 19:15:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.801 19:15:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.801 19:15:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:37.801 00:14:37.801 real 0m41.553s 00:14:37.801 user 1m6.519s 00:14:37.801 sys 0m11.395s 00:14:37.801 19:15:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:37.801 19:15:15 -- common/autotest_common.sh@10 -- # set +x 00:14:37.801 
************************************ 00:14:37.801 END TEST nvmf_lvs_grow 00:14:37.801 ************************************ 00:14:38.060 19:15:15 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:38.060 19:15:15 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:38.060 19:15:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:38.060 19:15:15 -- common/autotest_common.sh@10 -- # set +x 00:14:38.060 ************************************ 00:14:38.060 START TEST nvmf_bdev_io_wait 00:14:38.060 ************************************ 00:14:38.060 19:15:15 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:38.060 * Looking for test storage... 00:14:38.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:38.060 19:15:15 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.060 19:15:15 -- nvmf/common.sh@7 -- # uname -s 00:14:38.060 19:15:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.060 19:15:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.060 19:15:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.060 19:15:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.060 19:15:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.060 19:15:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.060 19:15:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.060 19:15:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.060 19:15:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.060 19:15:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.060 19:15:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:14:38.060 19:15:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:14:38.060 19:15:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.060 19:15:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.060 19:15:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.060 19:15:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.060 19:15:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.060 19:15:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.060 19:15:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.060 19:15:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.060 19:15:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.061 19:15:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.061 19:15:15 -- paths/export.sh@5 -- # export PATH 00:14:38.061 19:15:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.061 19:15:15 -- nvmf/common.sh@46 -- # : 0 00:14:38.061 19:15:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:38.061 19:15:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:38.061 19:15:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:38.061 19:15:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.061 19:15:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.061 19:15:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:38.061 19:15:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:38.061 19:15:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:38.061 19:15:15 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.061 19:15:15 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.061 19:15:15 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:38.061 19:15:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:38.061 19:15:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.061 19:15:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:38.061 19:15:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:38.061 19:15:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:38.061 19:15:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.061 19:15:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.061 19:15:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.061 19:15:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:38.061 19:15:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:38.061 19:15:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:38.061 19:15:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:38.061 19:15:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
00:14:38.061 19:15:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:38.061 19:15:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.061 19:15:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.061 19:15:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:38.061 19:15:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:38.061 19:15:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:38.061 19:15:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:38.061 19:15:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:38.061 19:15:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.061 19:15:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:38.061 19:15:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:38.061 19:15:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:38.061 19:15:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:38.061 19:15:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:38.061 19:15:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:38.061 Cannot find device "nvmf_tgt_br" 00:14:38.061 19:15:15 -- nvmf/common.sh@154 -- # true 00:14:38.061 19:15:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:38.061 Cannot find device "nvmf_tgt_br2" 00:14:38.061 19:15:15 -- nvmf/common.sh@155 -- # true 00:14:38.061 19:15:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:38.061 19:15:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:38.061 Cannot find device "nvmf_tgt_br" 00:14:38.061 19:15:15 -- nvmf/common.sh@157 -- # true 00:14:38.061 19:15:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:38.061 Cannot find device "nvmf_tgt_br2" 00:14:38.061 19:15:15 -- nvmf/common.sh@158 -- # true 00:14:38.061 19:15:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:38.329 19:15:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:38.329 19:15:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:38.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.329 19:15:15 -- nvmf/common.sh@161 -- # true 00:14:38.329 19:15:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.329 19:15:15 -- nvmf/common.sh@162 -- # true 00:14:38.329 19:15:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:38.329 19:15:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:38.329 19:15:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:38.329 19:15:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:38.329 19:15:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.329 19:15:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.329 19:15:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.329 19:15:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:38.329 19:15:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:38.329 
19:15:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:38.329 19:15:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:38.329 19:15:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:38.329 19:15:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:38.329 19:15:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.329 19:15:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.329 19:15:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.329 19:15:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:38.329 19:15:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:38.329 19:15:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.329 19:15:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.329 19:15:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.329 19:15:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.329 19:15:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.329 19:15:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:38.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:14:38.329 00:14:38.329 --- 10.0.0.2 ping statistics --- 00:14:38.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.329 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:14:38.329 19:15:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:38.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:14:38.632 00:14:38.632 --- 10.0.0.3 ping statistics --- 00:14:38.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.632 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:38.632 19:15:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:38.632 00:14:38.632 --- 10.0.0.1 ping statistics --- 00:14:38.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.632 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:38.632 19:15:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.632 19:15:15 -- nvmf/common.sh@421 -- # return 0 00:14:38.632 19:15:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:38.632 19:15:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.632 19:15:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:38.632 19:15:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:38.632 19:15:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.632 19:15:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:38.632 19:15:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:38.632 19:15:15 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:38.632 19:15:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:38.632 19:15:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:38.632 19:15:15 -- common/autotest_common.sh@10 -- # set +x 00:14:38.632 19:15:15 -- nvmf/common.sh@469 -- # nvmfpid=72867 00:14:38.632 19:15:15 -- nvmf/common.sh@470 -- # waitforlisten 72867 00:14:38.632 19:15:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:38.632 19:15:15 -- common/autotest_common.sh@817 -- # '[' -z 72867 ']' 00:14:38.632 19:15:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.632 19:15:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:38.632 19:15:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.632 19:15:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:38.632 19:15:15 -- common/autotest_common.sh@10 -- # set +x 00:14:38.632 [2024-02-14 19:15:15.837242] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:38.632 [2024-02-14 19:15:15.837360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.632 [2024-02-14 19:15:15.981865] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.894 [2024-02-14 19:15:16.115527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:38.894 [2024-02-14 19:15:16.115712] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.894 [2024-02-14 19:15:16.115728] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.894 [2024-02-14 19:15:16.115739] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:38.894 [2024-02-14 19:15:16.115977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.894 [2024-02-14 19:15:16.116660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.894 [2024-02-14 19:15:16.116759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.894 [2024-02-14 19:15:16.116765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.461 19:15:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:39.462 19:15:16 -- common/autotest_common.sh@850 -- # return 0 00:14:39.462 19:15:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:39.462 19:15:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:39.462 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:14:39.462 19:15:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.462 19:15:16 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:39.462 19:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.462 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 19:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.721 19:15:16 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:39.721 19:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.721 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 19:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.721 19:15:16 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.721 19:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.721 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 [2024-02-14 19:15:16.987260] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.721 19:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.721 19:15:16 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.721 19:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.721 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 Malloc0 00:14:39.721 19:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:39.721 19:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.721 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 19:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.721 19:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.721 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 19:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.721 19:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.721 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 [2024-02-14 19:15:17.049576] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.721 19:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=72920 00:14:39.721 19:15:17 
-- target/bdev_io_wait.sh@30 -- # READ_PID=72922 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:39.721 19:15:17 -- nvmf/common.sh@520 -- # config=() 00:14:39.721 19:15:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=72924 00:14:39.721 19:15:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:39.721 19:15:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:39.721 { 00:14:39.721 "params": { 00:14:39.721 "name": "Nvme$subsystem", 00:14:39.721 "trtype": "$TEST_TRANSPORT", 00:14:39.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.721 "adrfam": "ipv4", 00:14:39.721 "trsvcid": "$NVMF_PORT", 00:14:39.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.721 "hdgst": ${hdgst:-false}, 00:14:39.721 "ddgst": ${ddgst:-false} 00:14:39.721 }, 00:14:39.721 "method": "bdev_nvme_attach_controller" 00:14:39.721 } 00:14:39.721 EOF 00:14:39.721 )") 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:39.721 19:15:17 -- nvmf/common.sh@520 -- # config=() 00:14:39.721 19:15:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:39.721 19:15:17 -- nvmf/common.sh@520 -- # config=() 00:14:39.721 19:15:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:39.721 19:15:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:39.721 19:15:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:39.721 19:15:17 -- nvmf/common.sh@542 -- # cat 00:14:39.721 19:15:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:39.721 { 00:14:39.721 "params": { 00:14:39.721 "name": "Nvme$subsystem", 00:14:39.721 "trtype": "$TEST_TRANSPORT", 00:14:39.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.721 "adrfam": "ipv4", 00:14:39.721 "trsvcid": "$NVMF_PORT", 00:14:39.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.721 "hdgst": ${hdgst:-false}, 00:14:39.721 "ddgst": ${ddgst:-false} 00:14:39.721 }, 00:14:39.721 "method": "bdev_nvme_attach_controller" 00:14:39.721 } 00:14:39.721 EOF 00:14:39.721 )") 00:14:39.721 19:15:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:39.721 { 00:14:39.721 "params": { 00:14:39.721 "name": "Nvme$subsystem", 00:14:39.721 "trtype": "$TEST_TRANSPORT", 00:14:39.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.721 "adrfam": "ipv4", 00:14:39.721 "trsvcid": "$NVMF_PORT", 00:14:39.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.721 "hdgst": ${hdgst:-false}, 00:14:39.721 "ddgst": ${ddgst:-false} 00:14:39.721 }, 00:14:39.721 "method": 
"bdev_nvme_attach_controller" 00:14:39.721 } 00:14:39.721 EOF 00:14:39.721 )") 00:14:39.721 19:15:17 -- nvmf/common.sh@542 -- # cat 00:14:39.721 19:15:17 -- nvmf/common.sh@542 -- # cat 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:39.721 19:15:17 -- nvmf/common.sh@520 -- # config=() 00:14:39.721 19:15:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:39.721 19:15:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:39.721 19:15:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:39.721 { 00:14:39.721 "params": { 00:14:39.721 "name": "Nvme$subsystem", 00:14:39.721 "trtype": "$TEST_TRANSPORT", 00:14:39.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.721 "adrfam": "ipv4", 00:14:39.721 "trsvcid": "$NVMF_PORT", 00:14:39.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.721 "hdgst": ${hdgst:-false}, 00:14:39.721 "ddgst": ${ddgst:-false} 00:14:39.721 }, 00:14:39.721 "method": "bdev_nvme_attach_controller" 00:14:39.721 } 00:14:39.721 EOF 00:14:39.721 )") 00:14:39.721 19:15:17 -- nvmf/common.sh@544 -- # jq . 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=72926 00:14:39.721 19:15:17 -- target/bdev_io_wait.sh@35 -- # sync 00:14:39.721 19:15:17 -- nvmf/common.sh@544 -- # jq . 00:14:39.721 19:15:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:39.721 19:15:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:39.721 "params": { 00:14:39.721 "name": "Nvme1", 00:14:39.721 "trtype": "tcp", 00:14:39.721 "traddr": "10.0.0.2", 00:14:39.721 "adrfam": "ipv4", 00:14:39.721 "trsvcid": "4420", 00:14:39.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.721 "hdgst": false, 00:14:39.721 "ddgst": false 00:14:39.721 }, 00:14:39.721 "method": "bdev_nvme_attach_controller" 00:14:39.721 }' 00:14:39.721 19:15:17 -- nvmf/common.sh@542 -- # cat 00:14:39.721 19:15:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:39.721 19:15:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:39.721 "params": { 00:14:39.721 "name": "Nvme1", 00:14:39.721 "trtype": "tcp", 00:14:39.721 "traddr": "10.0.0.2", 00:14:39.721 "adrfam": "ipv4", 00:14:39.721 "trsvcid": "4420", 00:14:39.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.721 "hdgst": false, 00:14:39.721 "ddgst": false 00:14:39.721 }, 00:14:39.721 "method": "bdev_nvme_attach_controller" 00:14:39.721 }' 00:14:39.721 19:15:17 -- nvmf/common.sh@544 -- # jq . 00:14:39.721 19:15:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:39.721 19:15:17 -- nvmf/common.sh@544 -- # jq . 
00:14:39.721 19:15:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:39.721 "params": { 00:14:39.721 "name": "Nvme1", 00:14:39.721 "trtype": "tcp", 00:14:39.721 "traddr": "10.0.0.2", 00:14:39.721 "adrfam": "ipv4", 00:14:39.721 "trsvcid": "4420", 00:14:39.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.721 "hdgst": false, 00:14:39.721 "ddgst": false 00:14:39.721 }, 00:14:39.721 "method": "bdev_nvme_attach_controller" 00:14:39.721 }' 00:14:39.721 19:15:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:39.721 19:15:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:39.721 "params": { 00:14:39.721 "name": "Nvme1", 00:14:39.721 "trtype": "tcp", 00:14:39.721 "traddr": "10.0.0.2", 00:14:39.721 "adrfam": "ipv4", 00:14:39.721 "trsvcid": "4420", 00:14:39.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.721 "hdgst": false, 00:14:39.721 "ddgst": false 00:14:39.721 }, 00:14:39.721 "method": "bdev_nvme_attach_controller" 00:14:39.721 }' 00:14:39.722 [2024-02-14 19:15:17.113841] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:39.722 [2024-02-14 19:15:17.113845] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:39.722 [2024-02-14 19:15:17.113943] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:39.722 [2024-02-14 19:15:17.114230] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:39.722 19:15:17 -- target/bdev_io_wait.sh@37 -- # wait 72920 00:14:39.722 [2024-02-14 19:15:17.126702] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:39.722 [2024-02-14 19:15:17.127277] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:39.980 [2024-02-14 19:15:17.147341] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:14:39.980 [2024-02-14 19:15:17.147738] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:39.980 [2024-02-14 19:15:17.322316] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.240 [2024-02-14 19:15:17.400765] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.240 [2024-02-14 19:15:17.428028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:40.240 [2024-02-14 19:15:17.428381] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:40.240 [2024-02-14 19:15:17.471394] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.240 [2024-02-14 19:15:17.507612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:40.240 [2024-02-14 19:15:17.507783] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:40.240 [2024-02-14 19:15:17.562097] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.240 Running I/O for 1 seconds... 00:14:40.240 [2024-02-14 19:15:17.577145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:40.240 [2024-02-14 19:15:17.577367] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:40.499 Running I/O for 1 seconds... 00:14:40.499 [2024-02-14 19:15:17.672400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:40.499 [2024-02-14 19:15:17.672674] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:40.499 Running I/O for 1 seconds... 00:14:40.499 Running I/O for 1 seconds... 
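The four bdevperf instances above are each fed a one-controller JSON config over /dev/fd/63 and drive a single workload (write, read, flush, unmap) for one second against the tcp listener. A standalone sketch of the write run, reconstructed from the parameters in the trace (not captured output; $SPDK_DIR is a placeholder, and the "subsystems"/"bdev" envelope is the usual SPDK JSON-config wrapper around the bdev_nvme_attach_controller call printed above):

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Single write worker on core 4 (mask 0x10), queue depth 128, 4 KiB I/O, 1 second run.
$SPDK_DIR/build/examples/bdevperf -m 0x10 -i 1 --json "$cfg" -q 128 -o 4096 -w write -t 1 -s 256
rm -f "$cfg"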
00:14:41.437 00:14:41.437 Latency(us) 00:14:41.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.437 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:41.437 Nvme1n1 : 1.00 199918.83 780.93 0.00 0.00 637.48 264.38 830.37 00:14:41.437 =================================================================================================================== 00:14:41.437 Total : 199918.83 780.93 0.00 0.00 637.48 264.38 830.37 00:14:41.437 [2024-02-14 19:15:18.573714] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:41.437 00:14:41.437 Latency(us) 00:14:41.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.437 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:41.437 Nvme1n1 : 1.01 10814.03 42.24 0.00 0.00 11794.02 6196.13 18826.71 00:14:41.437 =================================================================================================================== 00:14:41.437 Total : 10814.03 42.24 0.00 0.00 11794.02 6196.13 18826.71 00:14:41.437 [2024-02-14 19:15:18.669248] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:41.437 00:14:41.437 Latency(us) 00:14:41.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.437 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:41.437 Nvme1n1 : 1.01 6621.38 25.86 0.00 0.00 19210.16 11379.43 29431.62 00:14:41.437 =================================================================================================================== 00:14:41.437 Total : 6621.38 25.86 0.00 0.00 19210.16 11379.43 29431.62 00:14:41.437 [2024-02-14 19:15:18.733383] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:41.437 00:14:41.437 Latency(us) 00:14:41.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.437 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:41.437 Nvme1n1 : 1.01 7559.26 29.53 0.00 0.00 16857.16 5808.87 27644.28 00:14:41.437 =================================================================================================================== 00:14:41.437 Total : 7559.26 29.53 0.00 0.00 16857.16 5808.87 27644.28 00:14:41.437 [2024-02-14 19:15:18.832666] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:41.697 19:15:19 -- target/bdev_io_wait.sh@38 -- # wait 72922 00:14:41.697 19:15:19 -- target/bdev_io_wait.sh@39 -- # wait 72924 00:14:41.697 19:15:19 -- target/bdev_io_wait.sh@40 -- # wait 72926 00:14:41.697 19:15:19 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.697 19:15:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.697 19:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:41.956 19:15:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.956 19:15:19 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:41.956 19:15:19 -- target/bdev_io_wait.sh@46 -- # 
nvmftestfini 00:14:41.956 19:15:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:41.956 19:15:19 -- nvmf/common.sh@116 -- # sync 00:14:41.956 19:15:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:41.956 19:15:19 -- nvmf/common.sh@119 -- # set +e 00:14:41.956 19:15:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:41.956 19:15:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:41.956 rmmod nvme_tcp 00:14:41.956 rmmod nvme_fabrics 00:14:41.956 rmmod nvme_keyring 00:14:41.956 19:15:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:41.956 19:15:19 -- nvmf/common.sh@123 -- # set -e 00:14:41.956 19:15:19 -- nvmf/common.sh@124 -- # return 0 00:14:41.956 19:15:19 -- nvmf/common.sh@477 -- # '[' -n 72867 ']' 00:14:41.956 19:15:19 -- nvmf/common.sh@478 -- # killprocess 72867 00:14:41.956 19:15:19 -- common/autotest_common.sh@924 -- # '[' -z 72867 ']' 00:14:41.956 19:15:19 -- common/autotest_common.sh@928 -- # kill -0 72867 00:14:41.956 19:15:19 -- common/autotest_common.sh@929 -- # uname 00:14:41.956 19:15:19 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:41.956 19:15:19 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 72867 00:14:41.956 killing process with pid 72867 00:14:41.956 19:15:19 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:41.956 19:15:19 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:41.956 19:15:19 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 72867' 00:14:41.956 19:15:19 -- common/autotest_common.sh@943 -- # kill 72867 00:14:41.956 19:15:19 -- common/autotest_common.sh@948 -- # wait 72867 00:14:42.215 19:15:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:42.215 19:15:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:42.215 19:15:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:42.215 19:15:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.215 19:15:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:42.215 19:15:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.215 19:15:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.215 19:15:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.215 19:15:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:42.215 00:14:42.215 real 0m4.318s 00:14:42.215 user 0m18.248s 00:14:42.215 sys 0m2.262s 00:14:42.215 19:15:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:42.215 19:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:42.215 ************************************ 00:14:42.215 END TEST nvmf_bdev_io_wait 00:14:42.215 ************************************ 00:14:42.215 19:15:19 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:42.215 19:15:19 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:42.215 19:15:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:42.215 19:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:42.215 ************************************ 00:14:42.215 START TEST nvmf_queue_depth 00:14:42.215 ************************************ 00:14:42.215 19:15:19 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:42.474 * Looking for test storage... 
00:14:42.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:42.474 19:15:19 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.474 19:15:19 -- nvmf/common.sh@7 -- # uname -s 00:14:42.474 19:15:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.474 19:15:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.474 19:15:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.474 19:15:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.474 19:15:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.474 19:15:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.474 19:15:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.474 19:15:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.474 19:15:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.474 19:15:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.474 19:15:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:14:42.474 19:15:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:14:42.474 19:15:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.474 19:15:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.474 19:15:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.474 19:15:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.474 19:15:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.474 19:15:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.474 19:15:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.474 19:15:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.474 19:15:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.474 19:15:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.474 19:15:19 -- 
paths/export.sh@5 -- # export PATH 00:14:42.475 19:15:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.475 19:15:19 -- nvmf/common.sh@46 -- # : 0 00:14:42.475 19:15:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:42.475 19:15:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:42.475 19:15:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:42.475 19:15:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.475 19:15:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.475 19:15:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:42.475 19:15:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:42.475 19:15:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:42.475 19:15:19 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:42.475 19:15:19 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:42.475 19:15:19 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.475 19:15:19 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:42.475 19:15:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:42.475 19:15:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.475 19:15:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:42.475 19:15:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:42.475 19:15:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:42.475 19:15:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.475 19:15:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.475 19:15:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.475 19:15:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:42.475 19:15:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:42.475 19:15:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:42.475 19:15:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:42.475 19:15:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:42.475 19:15:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:42.475 19:15:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.475 19:15:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.475 19:15:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:42.475 19:15:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:42.475 19:15:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:42.475 19:15:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:42.475 19:15:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:42.475 19:15:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.475 19:15:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:42.475 19:15:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:42.475 19:15:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:42.475 19:15:19 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:42.475 19:15:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:42.475 19:15:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:42.475 Cannot find device "nvmf_tgt_br" 00:14:42.475 19:15:19 -- nvmf/common.sh@154 -- # true 00:14:42.475 19:15:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.475 Cannot find device "nvmf_tgt_br2" 00:14:42.475 19:15:19 -- nvmf/common.sh@155 -- # true 00:14:42.475 19:15:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:42.475 19:15:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:42.475 Cannot find device "nvmf_tgt_br" 00:14:42.475 19:15:19 -- nvmf/common.sh@157 -- # true 00:14:42.475 19:15:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:42.475 Cannot find device "nvmf_tgt_br2" 00:14:42.475 19:15:19 -- nvmf/common.sh@158 -- # true 00:14:42.475 19:15:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:42.475 19:15:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:42.475 19:15:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.475 19:15:19 -- nvmf/common.sh@161 -- # true 00:14:42.475 19:15:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.475 19:15:19 -- nvmf/common.sh@162 -- # true 00:14:42.475 19:15:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.475 19:15:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.475 19:15:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.475 19:15:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.734 19:15:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.734 19:15:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.734 19:15:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:42.734 19:15:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:42.734 19:15:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:42.734 19:15:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:42.734 19:15:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:42.734 19:15:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:42.734 19:15:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:42.734 19:15:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:42.734 19:15:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:42.734 19:15:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:42.734 19:15:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:42.734 19:15:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:42.734 19:15:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:42.734 19:15:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:42.734 19:15:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:42.734 
19:15:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:42.734 19:15:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:42.734 19:15:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:42.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:14:42.734 00:14:42.734 --- 10.0.0.2 ping statistics --- 00:14:42.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.734 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:42.734 19:15:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:42.734 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:42.734 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:42.734 00:14:42.734 --- 10.0.0.3 ping statistics --- 00:14:42.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.734 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:42.734 19:15:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:42.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:14:42.734 00:14:42.734 --- 10.0.0.1 ping statistics --- 00:14:42.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.734 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:42.734 19:15:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.734 19:15:20 -- nvmf/common.sh@421 -- # return 0 00:14:42.734 19:15:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:42.734 19:15:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.734 19:15:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:42.734 19:15:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:42.734 19:15:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.734 19:15:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:42.734 19:15:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:42.734 19:15:20 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:42.734 19:15:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:42.734 19:15:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:42.734 19:15:20 -- common/autotest_common.sh@10 -- # set +x 00:14:42.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.734 19:15:20 -- nvmf/common.sh@469 -- # nvmfpid=73159 00:14:42.734 19:15:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:42.734 19:15:20 -- nvmf/common.sh@470 -- # waitforlisten 73159 00:14:42.734 19:15:20 -- common/autotest_common.sh@817 -- # '[' -z 73159 ']' 00:14:42.734 19:15:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.734 19:15:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:42.734 19:15:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.734 19:15:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:42.734 19:15:20 -- common/autotest_common.sh@10 -- # set +x 00:14:42.734 [2024-02-14 19:15:20.142004] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
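[annotation] At this point nvmf_veth_init has finished building the virtual test network: the initiator keeps 10.0.0.1 on nvmf_init_if, the target addresses 10.0.0.2 and 10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace, everything is bridged over nvmf_br, and the three pings above confirm connectivity before nvmf_tgt starts. A condensed sketch of the same topology, reduced to a single target address for readability (interface and namespace names exactly as used by the harness above):

  # create the target namespace and one veth pair per endpoint
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # address the initiator end on the host and the target end inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  # bridge the host-side peers and allow NVMe/TCP traffic to port 4420
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT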
00:14:42.734 [2024-02-14 19:15:20.142112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.993 [2024-02-14 19:15:20.282144] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.253 [2024-02-14 19:15:20.450907] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:43.253 [2024-02-14 19:15:20.451099] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.253 [2024-02-14 19:15:20.451115] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.253 [2024-02-14 19:15:20.451124] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.253 [2024-02-14 19:15:20.451158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.821 19:15:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.821 19:15:21 -- common/autotest_common.sh@850 -- # return 0 00:14:43.821 19:15:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:43.821 19:15:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:43.821 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:14:43.821 19:15:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.821 19:15:21 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.821 19:15:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.821 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:14:43.821 [2024-02-14 19:15:21.114893] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.821 19:15:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.821 19:15:21 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.821 19:15:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.821 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:14:43.821 Malloc0 00:14:43.821 19:15:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.821 19:15:21 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:43.821 19:15:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.821 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:14:43.821 19:15:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.821 19:15:21 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.821 19:15:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.821 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:14:43.821 19:15:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.821 19:15:21 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.821 19:15:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.821 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:14:43.821 [2024-02-14 19:15:21.184437] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
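[annotation] The rpc_cmd calls above configure the freshly started target over its default RPC socket (/var/tmp/spdk.sock); rpc_cmd is effectively the harness's wrapper around scripts/rpc.py. Issued by hand from the SPDK repo root, the same queue_depth setup looks roughly like this (names, sizes, and addresses taken from the log above):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data size
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MB RAM-backed bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420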
00:14:43.821 19:15:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.821 19:15:21 -- target/queue_depth.sh@30 -- # bdevperf_pid=73209 00:14:43.821 19:15:21 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:43.821 19:15:21 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:43.821 19:15:21 -- target/queue_depth.sh@33 -- # waitforlisten 73209 /var/tmp/bdevperf.sock 00:14:43.821 19:15:21 -- common/autotest_common.sh@817 -- # '[' -z 73209 ']' 00:14:43.821 19:15:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.821 19:15:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:43.821 19:15:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.821 19:15:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:43.821 19:15:21 -- common/autotest_common.sh@10 -- # set +x 00:14:44.080 [2024-02-14 19:15:21.248096] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:44.080 [2024-02-14 19:15:21.248500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73209 ] 00:14:44.080 [2024-02-14 19:15:21.392460] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.339 [2024-02-14 19:15:21.519652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.907 19:15:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:44.907 19:15:22 -- common/autotest_common.sh@850 -- # return 0 00:14:44.907 19:15:22 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:44.907 19:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:44.907 19:15:22 -- common/autotest_common.sh@10 -- # set +x 00:14:45.166 NVMe0n1 00:14:45.166 19:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.166 19:15:22 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:45.166 Running I/O for 10 seconds... 
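[annotation] What runs for the next ten seconds is the actual queue-depth exercise: bdevperf was started in waiting mode (-z) on its own RPC socket, a controller was attached to the target over NVMe/TCP, and perform_tests releases a verify workload at queue depth 1024. A condensed replay of the three steps visible above, with paths shortened to the SPDK repo root:

  # 1. start bdevperf idle, with a 1024-deep 4 KiB verify workload defined for a 10 s run
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # 2. attach the remote namespace as bdev NVMe0n1 over NVMe/TCP
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # 3. kick off the workload and wait for the run to finish
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests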
00:14:55.140 00:14:55.140 Latency(us) 00:14:55.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.140 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:55.140 Verification LBA range: start 0x0 length 0x4000 00:14:55.140 NVMe0n1 : 10.07 13142.94 51.34 0.00 0.00 77601.28 16681.89 59816.49 00:14:55.140 =================================================================================================================== 00:14:55.140 Total : 13142.94 51.34 0.00 0.00 77601.28 16681.89 59816.49 00:14:55.140 0 00:14:55.140 19:15:32 -- target/queue_depth.sh@39 -- # killprocess 73209 00:14:55.140 19:15:32 -- common/autotest_common.sh@924 -- # '[' -z 73209 ']' 00:14:55.140 19:15:32 -- common/autotest_common.sh@928 -- # kill -0 73209 00:14:55.140 19:15:32 -- common/autotest_common.sh@929 -- # uname 00:14:55.140 19:15:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:55.140 19:15:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 73209 00:14:55.140 killing process with pid 73209 00:14:55.140 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.140 00:14:55.141 Latency(us) 00:14:55.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.141 =================================================================================================================== 00:14:55.141 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.141 19:15:32 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:55.141 19:15:32 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:55.141 19:15:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 73209' 00:14:55.141 19:15:32 -- common/autotest_common.sh@943 -- # kill 73209 00:14:55.141 19:15:32 -- common/autotest_common.sh@948 -- # wait 73209 00:14:55.399 19:15:32 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:55.399 19:15:32 -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:55.399 19:15:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:55.399 19:15:32 -- nvmf/common.sh@116 -- # sync 00:14:55.657 19:15:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:55.657 19:15:32 -- nvmf/common.sh@119 -- # set +e 00:14:55.657 19:15:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:55.657 19:15:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:55.657 rmmod nvme_tcp 00:14:55.657 rmmod nvme_fabrics 00:14:55.657 rmmod nvme_keyring 00:14:55.657 19:15:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:55.657 19:15:32 -- nvmf/common.sh@123 -- # set -e 00:14:55.657 19:15:32 -- nvmf/common.sh@124 -- # return 0 00:14:55.657 19:15:32 -- nvmf/common.sh@477 -- # '[' -n 73159 ']' 00:14:55.657 19:15:32 -- nvmf/common.sh@478 -- # killprocess 73159 00:14:55.657 19:15:32 -- common/autotest_common.sh@924 -- # '[' -z 73159 ']' 00:14:55.657 19:15:32 -- common/autotest_common.sh@928 -- # kill -0 73159 00:14:55.657 19:15:32 -- common/autotest_common.sh@929 -- # uname 00:14:55.657 19:15:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:55.657 19:15:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 73159 00:14:55.657 killing process with pid 73159 00:14:55.657 19:15:32 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:14:55.657 19:15:32 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:14:55.657 19:15:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 73159' 00:14:55.657 19:15:32 -- 
common/autotest_common.sh@943 -- # kill 73159 00:14:55.657 19:15:32 -- common/autotest_common.sh@948 -- # wait 73159 00:14:56.225 19:15:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:56.225 19:15:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:56.225 19:15:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:56.225 19:15:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.225 19:15:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:56.225 19:15:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.225 19:15:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.225 19:15:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.225 19:15:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:56.225 ************************************ 00:14:56.225 END TEST nvmf_queue_depth 00:14:56.225 ************************************ 00:14:56.225 00:14:56.225 real 0m13.791s 00:14:56.225 user 0m22.881s 00:14:56.225 sys 0m2.501s 00:14:56.225 19:15:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:56.225 19:15:33 -- common/autotest_common.sh@10 -- # set +x 00:14:56.226 19:15:33 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:56.226 19:15:33 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:56.226 19:15:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:56.226 19:15:33 -- common/autotest_common.sh@10 -- # set +x 00:14:56.226 ************************************ 00:14:56.226 START TEST nvmf_multipath 00:14:56.226 ************************************ 00:14:56.226 19:15:33 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:56.226 * Looking for test storage... 
00:14:56.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.226 19:15:33 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.226 19:15:33 -- nvmf/common.sh@7 -- # uname -s 00:14:56.226 19:15:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.226 19:15:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.226 19:15:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.226 19:15:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.226 19:15:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.226 19:15:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.226 19:15:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.226 19:15:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.226 19:15:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.226 19:15:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.226 19:15:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:14:56.226 19:15:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:14:56.226 19:15:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.226 19:15:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.226 19:15:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.226 19:15:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.226 19:15:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.226 19:15:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.226 19:15:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.226 19:15:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.226 19:15:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.226 19:15:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.226 19:15:33 -- 
paths/export.sh@5 -- # export PATH 00:14:56.226 19:15:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.226 19:15:33 -- nvmf/common.sh@46 -- # : 0 00:14:56.226 19:15:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:56.226 19:15:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:56.226 19:15:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:56.226 19:15:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.226 19:15:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.226 19:15:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:56.226 19:15:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:56.226 19:15:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:56.226 19:15:33 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:56.226 19:15:33 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:56.226 19:15:33 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:56.226 19:15:33 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.226 19:15:33 -- target/multipath.sh@43 -- # nvmftestinit 00:14:56.226 19:15:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:56.226 19:15:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.226 19:15:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:56.226 19:15:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:56.226 19:15:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:56.226 19:15:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.226 19:15:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.226 19:15:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.226 19:15:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:56.226 19:15:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:56.226 19:15:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:56.226 19:15:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:56.226 19:15:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:56.226 19:15:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:56.226 19:15:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.226 19:15:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.226 19:15:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.226 19:15:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:56.226 19:15:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.226 19:15:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.226 19:15:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.226 19:15:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.226 19:15:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.226 19:15:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.226 19:15:33 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.226 19:15:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.226 19:15:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:56.226 19:15:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:56.226 Cannot find device "nvmf_tgt_br" 00:14:56.226 19:15:33 -- nvmf/common.sh@154 -- # true 00:14:56.226 19:15:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.226 Cannot find device "nvmf_tgt_br2" 00:14:56.226 19:15:33 -- nvmf/common.sh@155 -- # true 00:14:56.226 19:15:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:56.485 19:15:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:56.486 Cannot find device "nvmf_tgt_br" 00:14:56.486 19:15:33 -- nvmf/common.sh@157 -- # true 00:14:56.486 19:15:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:56.486 Cannot find device "nvmf_tgt_br2" 00:14:56.486 19:15:33 -- nvmf/common.sh@158 -- # true 00:14:56.486 19:15:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:56.486 19:15:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:56.486 19:15:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.486 19:15:33 -- nvmf/common.sh@161 -- # true 00:14:56.486 19:15:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.486 19:15:33 -- nvmf/common.sh@162 -- # true 00:14:56.486 19:15:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.486 19:15:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.486 19:15:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.486 19:15:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.486 19:15:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.486 19:15:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.486 19:15:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.486 19:15:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.486 19:15:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.486 19:15:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:56.486 19:15:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:56.486 19:15:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:56.486 19:15:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:56.486 19:15:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.486 19:15:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.486 19:15:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.486 19:15:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:56.486 19:15:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:56.486 19:15:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.486 19:15:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.486 19:15:33 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.486 19:15:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.486 19:15:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.745 19:15:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:56.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:14:56.745 00:14:56.745 --- 10.0.0.2 ping statistics --- 00:14:56.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.745 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:14:56.745 19:15:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:56.745 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.745 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:14:56.745 00:14:56.745 --- 10.0.0.3 ping statistics --- 00:14:56.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.745 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:56.745 19:15:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:56.745 00:14:56.745 --- 10.0.0.1 ping statistics --- 00:14:56.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.745 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:56.745 19:15:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.745 19:15:33 -- nvmf/common.sh@421 -- # return 0 00:14:56.745 19:15:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:56.745 19:15:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.745 19:15:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:56.745 19:15:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:56.745 19:15:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.745 19:15:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:56.745 19:15:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:56.745 19:15:33 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:14:56.745 19:15:33 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:56.745 19:15:33 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:56.745 19:15:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:56.745 19:15:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:56.745 19:15:33 -- common/autotest_common.sh@10 -- # set +x 00:14:56.745 19:15:33 -- nvmf/common.sh@469 -- # nvmfpid=73547 00:14:56.745 19:15:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.745 19:15:33 -- nvmf/common.sh@470 -- # waitforlisten 73547 00:14:56.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.745 19:15:33 -- common/autotest_common.sh@817 -- # '[' -z 73547 ']' 00:14:56.745 19:15:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.745 19:15:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:56.745 19:15:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
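[annotation] The multipath test that follows differs from queue_depth in two ways: nvmf_tgt runs on four cores (-m 0xF), and the subsystem is created with ANA reporting (-r) and listeners on both target addresses, so the kernel initiator sees two paths (nvme0c0n1 and nvme0c1n1). The core of the test is flipping ANA states on each listener while fio runs and checking that the kernel's sysfs view follows. A minimal sketch of that control loop, condensed from the RPCs and sysfs checks the script performs below (NVME_HOSTNQN/NVME_HOSTID are the values generated by nvme gen-hostnqn earlier in the log):

  # advertise the namespace on both addresses of an ANA-enabled subsystem
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # connect the kernel initiator once per path
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G

  # fail one path over while I/O runs, then confirm the initiator noticed
  ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  cat /sys/block/nvme0c0n1/ana_state    # expected: inaccessible
  cat /sys/block/nvme0c1n1/ana_state    # expected: non-optimized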
00:14:56.745 19:15:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:56.745 19:15:33 -- common/autotest_common.sh@10 -- # set +x 00:14:56.745 [2024-02-14 19:15:34.017106] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:14:56.745 [2024-02-14 19:15:34.017220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.745 [2024-02-14 19:15:34.158842] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.004 [2024-02-14 19:15:34.282037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.004 [2024-02-14 19:15:34.282196] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.004 [2024-02-14 19:15:34.282209] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.004 [2024-02-14 19:15:34.282218] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.004 [2024-02-14 19:15:34.282465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.004 [2024-02-14 19:15:34.282848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.004 [2024-02-14 19:15:34.283036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.004 [2024-02-14 19:15:34.283047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.940 19:15:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:57.940 19:15:35 -- common/autotest_common.sh@850 -- # return 0 00:14:57.940 19:15:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:57.940 19:15:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:57.940 19:15:35 -- common/autotest_common.sh@10 -- # set +x 00:14:57.940 19:15:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.940 19:15:35 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:57.940 [2024-02-14 19:15:35.311694] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.940 19:15:35 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:58.198 Malloc0 00:14:58.457 19:15:35 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:14:58.457 19:15:35 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.715 19:15:36 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.974 [2024-02-14 19:15:36.308472] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.974 19:15:36 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:59.232 [2024-02-14 19:15:36.544843] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:59.232 19:15:36 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:14:59.490 19:15:36 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:14:59.749 19:15:36 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:14:59.749 19:15:36 -- common/autotest_common.sh@1175 -- # local i=0 00:14:59.749 19:15:36 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.749 19:15:36 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:14:59.749 19:15:36 -- common/autotest_common.sh@1182 -- # sleep 2 00:15:01.652 19:15:38 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:15:01.652 19:15:38 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:15:01.652 19:15:38 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.652 19:15:39 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:15:01.652 19:15:39 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.652 19:15:39 -- common/autotest_common.sh@1185 -- # return 0 00:15:01.652 19:15:39 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:01.652 19:15:39 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:01.652 19:15:39 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:01.652 19:15:39 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:01.652 19:15:39 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:01.652 19:15:39 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:01.652 19:15:39 -- target/multipath.sh@38 -- # return 0 00:15:01.652 19:15:39 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:01.652 19:15:39 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:01.652 19:15:39 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:01.652 19:15:39 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:01.652 19:15:39 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:01.652 19:15:39 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:01.653 19:15:39 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:01.653 19:15:39 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:01.653 19:15:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:01.653 19:15:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:01.653 19:15:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:01.653 19:15:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:01.653 19:15:39 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:01.653 19:15:39 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:01.653 19:15:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:01.653 19:15:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:01.653 19:15:39 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:01.653 19:15:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:01.653 19:15:39 -- target/multipath.sh@85 -- # echo numa 00:15:01.653 19:15:39 -- target/multipath.sh@88 -- # fio_pid=73685 00:15:01.653 19:15:39 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:01.653 19:15:39 -- target/multipath.sh@90 -- # sleep 1 00:15:01.653 [global] 00:15:01.653 thread=1 00:15:01.653 invalidate=1 00:15:01.653 rw=randrw 00:15:01.653 time_based=1 00:15:01.653 runtime=6 00:15:01.653 ioengine=libaio 00:15:01.653 direct=1 00:15:01.653 bs=4096 00:15:01.653 iodepth=128 00:15:01.653 norandommap=0 00:15:01.653 numjobs=1 00:15:01.653 00:15:01.653 verify_dump=1 00:15:01.653 verify_backlog=512 00:15:01.653 verify_state_save=0 00:15:01.653 do_verify=1 00:15:01.653 verify=crc32c-intel 00:15:01.653 [job0] 00:15:01.653 filename=/dev/nvme0n1 00:15:01.653 Could not set queue depth (nvme0n1) 00:15:01.911 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.911 fio-3.35 00:15:01.911 Starting 1 thread 00:15:02.846 19:15:40 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:03.104 19:15:40 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:03.362 19:15:40 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:03.362 19:15:40 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:03.362 19:15:40 -- target/multipath.sh@22 -- # local timeout=20 00:15:03.362 19:15:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:03.363 19:15:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:03.363 19:15:40 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:03.363 19:15:40 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:03.363 19:15:40 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:03.363 19:15:40 -- target/multipath.sh@22 -- # local timeout=20 00:15:03.363 19:15:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:03.363 19:15:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:03.363 19:15:40 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:03.363 19:15:40 -- target/multipath.sh@25 -- # sleep 1s 00:15:04.298 19:15:41 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:04.298 19:15:41 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:04.298 19:15:41 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:04.298 19:15:41 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:04.557 19:15:41 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:04.821 19:15:42 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:04.821 19:15:42 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:04.821 19:15:42 -- target/multipath.sh@22 -- # local timeout=20 00:15:04.821 19:15:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:04.821 19:15:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:04.821 19:15:42 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:04.821 19:15:42 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:04.821 19:15:42 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:04.821 19:15:42 -- target/multipath.sh@22 -- # local timeout=20 00:15:04.821 19:15:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:04.821 19:15:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:04.821 19:15:42 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:04.821 19:15:42 -- target/multipath.sh@25 -- # sleep 1s 00:15:05.770 19:15:43 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:05.770 19:15:43 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:05.770 19:15:43 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:05.770 19:15:43 -- target/multipath.sh@104 -- # wait 73685 00:15:08.302 00:15:08.303 job0: (groupid=0, jobs=1): err= 0: pid=73706: Wed Feb 14 19:15:45 2024 00:15:08.303 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(259MiB/6003msec) 00:15:08.303 slat (usec): min=4, max=5217, avg=50.55, stdev=221.62 00:15:08.303 clat (usec): min=1406, max=17654, avg=7774.32, stdev=1161.20 00:15:08.303 lat (usec): min=1417, max=17665, avg=7824.87, stdev=1168.92 00:15:08.303 clat percentiles (usec): 00:15:08.303 | 1.00th=[ 4948], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 6915], 00:15:08.303 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8029], 00:15:08.303 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9634], 00:15:08.303 | 99.00th=[11469], 99.50th=[11731], 99.90th=[12518], 99.95th=[14091], 00:15:08.303 | 99.99th=[16450] 00:15:08.303 bw ( KiB/s): min= 9904, max=30072, per=53.94%, avg=23849.45, stdev=6366.38, samples=11 00:15:08.303 iops : min= 2476, max= 7518, avg=5962.36, stdev=1591.59, samples=11 00:15:08.303 write: IOPS=6765, BW=26.4MiB/s (27.7MB/s)(143MiB/5410msec); 0 zone resets 00:15:08.303 slat (usec): min=14, max=3833, avg=63.10, stdev=154.96 00:15:08.303 clat (usec): min=754, max=16613, avg=6706.79, stdev=935.40 00:15:08.303 lat (usec): min=783, max=17624, avg=6769.89, stdev=939.51 00:15:08.303 clat percentiles (usec): 00:15:08.303 | 1.00th=[ 3818], 5.00th=[ 5080], 10.00th=[ 5800], 20.00th=[ 6128], 00:15:08.303 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 6915], 00:15:08.303 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7832], 00:15:08.303 | 99.00th=[ 9503], 99.50th=[10290], 99.90th=[11994], 99.95th=[13698], 00:15:08.303 | 99.99th=[16450] 00:15:08.303 bw ( KiB/s): min=10248, max=30216, per=88.23%, avg=23874.18, stdev=6131.79, samples=11 00:15:08.303 iops : min= 2562, max= 7554, avg=5968.55, stdev=1532.95, samples=11 00:15:08.303 lat (usec) : 1000=0.01% 00:15:08.303 lat (msec) : 2=0.02%, 4=0.68%, 10=96.61%, 20=2.69% 00:15:08.303 cpu : usr=5.81%, sys=23.44%, ctx=6336, majf=0, minf=133 00:15:08.303 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:08.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.303 issued rwts: total=66353,36599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.303 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.303 00:15:08.303 Run status group 0 (all jobs): 00:15:08.303 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=259MiB (272MB), run=6003-6003msec 00:15:08.303 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=143MiB (150MB), run=5410-5410msec 00:15:08.303 00:15:08.303 Disk stats (read/write): 00:15:08.303 nvme0n1: ios=65548/35851, merge=0/0, ticks=478299/224868, in_queue=703167, util=98.63% 00:15:08.303 19:15:45 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:08.303 19:15:45 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:08.562 19:15:45 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:15:08.562 19:15:45 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:08.562 19:15:45 -- target/multipath.sh@22 -- # local timeout=20 00:15:08.562 19:15:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:08.562 19:15:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:08.562 19:15:45 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:08.562 19:15:45 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:08.562 19:15:45 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:08.562 19:15:45 -- target/multipath.sh@22 -- # local timeout=20 00:15:08.562 19:15:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:08.562 19:15:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:08.562 19:15:45 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:08.562 19:15:45 -- target/multipath.sh@25 -- # sleep 1s 00:15:09.498 19:15:46 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:09.498 19:15:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:09.498 19:15:46 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:09.498 19:15:46 -- target/multipath.sh@113 -- # echo round-robin 00:15:09.498 19:15:46 -- target/multipath.sh@116 -- # fio_pid=73829 00:15:09.498 19:15:46 -- target/multipath.sh@118 -- # sleep 1 00:15:09.498 19:15:46 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:09.498 [global] 00:15:09.498 thread=1 00:15:09.498 invalidate=1 00:15:09.498 rw=randrw 00:15:09.498 time_based=1 00:15:09.498 runtime=6 00:15:09.498 ioengine=libaio 00:15:09.498 direct=1 00:15:09.498 bs=4096 00:15:09.498 iodepth=128 00:15:09.498 norandommap=0 00:15:09.498 numjobs=1 00:15:09.498 00:15:09.498 verify_dump=1 00:15:09.498 verify_backlog=512 00:15:09.498 verify_state_save=0 00:15:09.498 do_verify=1 00:15:09.498 verify=crc32c-intel 00:15:09.498 [job0] 00:15:09.498 filename=/dev/nvme0n1 00:15:09.498 Could not set queue depth (nvme0n1) 00:15:09.756 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.756 fio-3.35 00:15:09.756 Starting 1 thread 00:15:10.692 19:15:47 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:10.692 19:15:48 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:10.951 19:15:48 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:10.951 19:15:48 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:10.951 19:15:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:10.951 19:15:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:10.951 19:15:48 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:10.951 19:15:48 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:10.951 19:15:48 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:10.951 19:15:48 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:10.951 19:15:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:10.951 19:15:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:10.951 19:15:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:10.951 19:15:48 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:10.951 19:15:48 -- target/multipath.sh@25 -- # sleep 1s 00:15:12.326 19:15:49 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:12.326 19:15:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:12.326 19:15:49 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:12.326 19:15:49 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:12.326 19:15:49 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:12.585 19:15:49 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:12.585 19:15:49 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:12.585 19:15:49 -- target/multipath.sh@22 -- # local timeout=20 00:15:12.585 19:15:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:12.585 19:15:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:12.585 19:15:49 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:12.585 19:15:49 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:12.585 19:15:49 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:12.585 19:15:49 -- target/multipath.sh@22 -- # local timeout=20 00:15:12.585 19:15:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:12.585 19:15:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:12.585 19:15:49 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:12.585 19:15:49 -- target/multipath.sh@25 -- # sleep 1s 00:15:13.520 19:15:50 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:13.520 19:15:50 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:13.520 19:15:50 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:13.520 19:15:50 -- target/multipath.sh@132 -- # wait 73829 00:15:16.048 00:15:16.048 job0: (groupid=0, jobs=1): err= 0: pid=73850: Wed Feb 14 19:15:53 2024 00:15:16.048 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(272MiB/6006msec) 00:15:16.048 slat (usec): min=2, max=8048, avg=42.17, stdev=201.15 00:15:16.048 clat (usec): min=267, max=14998, avg=7563.54, stdev=1347.63 00:15:16.048 lat (usec): min=305, max=15009, avg=7605.72, stdev=1354.62 00:15:16.048 clat percentiles (usec): 00:15:16.048 | 1.00th=[ 3982], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 6652], 00:15:16.048 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7832], 00:15:16.048 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9765], 00:15:16.048 | 99.00th=[11600], 99.50th=[12125], 99.90th=[13566], 99.95th=[13698], 00:15:16.048 | 99.99th=[14746] 00:15:16.048 bw ( KiB/s): min=11704, max=31280, per=52.90%, avg=24552.73, stdev=6530.03, samples=11 00:15:16.048 iops : min= 2926, max= 7820, avg=6138.18, stdev=1632.51, samples=11 00:15:16.048 write: IOPS=6869, BW=26.8MiB/s (28.1MB/s)(145MiB/5402msec); 0 zone resets 00:15:16.048 slat (usec): min=4, max=4392, avg=56.12, stdev=139.36 00:15:16.048 clat (usec): min=482, max=13076, avg=6339.92, stdev=1155.56 00:15:16.048 lat (usec): min=582, max=13103, avg=6396.04, stdev=1161.78 00:15:16.048 clat percentiles (usec): 00:15:16.048 | 1.00th=[ 3326], 5.00th=[ 4178], 10.00th=[ 4752], 20.00th=[ 5604], 00:15:16.048 | 30.00th=[ 5997], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6652], 00:15:16.049 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7439], 95.00th=[ 7767], 00:15:16.049 | 99.00th=[ 9896], 99.50th=[10552], 99.90th=[11863], 99.95th=[11994], 00:15:16.049 | 99.99th=[12780] 00:15:16.049 bw ( KiB/s): min=12288, max=30608, per=89.39%, avg=24565.09, stdev=6405.83, samples=11 00:15:16.049 iops : min= 3072, max= 7652, avg=6141.27, stdev=1601.46, samples=11 00:15:16.049 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:16.049 lat (msec) : 2=0.08%, 4=1.91%, 10=94.85%, 20=3.14% 00:15:16.049 cpu : usr=5.53%, sys=25.30%, ctx=6482, majf=0, minf=114 00:15:16.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:16.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.049 issued rwts: total=69685,37111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.049 00:15:16.049 Run status group 0 (all jobs): 00:15:16.049 READ: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=272MiB (285MB), run=6006-6006msec 00:15:16.049 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=145MiB (152MB), run=5402-5402msec 00:15:16.049 00:15:16.049 Disk stats (read/write): 00:15:16.049 nvme0n1: ios=68708/36396, merge=0/0, ticks=485473/214188, in_queue=699661, util=98.60% 00:15:16.049 19:15:53 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:16.049 19:15:53 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.049 19:15:53 -- common/autotest_common.sh@1196 -- # local i=0 00:15:16.049 19:15:53 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:15:16.049 19:15:53 -- 
common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.049 19:15:53 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:15:16.049 19:15:53 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.049 19:15:53 -- common/autotest_common.sh@1208 -- # return 0 00:15:16.049 19:15:53 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.307 19:15:53 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:16.307 19:15:53 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:16.307 19:15:53 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:16.307 19:15:53 -- target/multipath.sh@144 -- # nvmftestfini 00:15:16.307 19:15:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:16.307 19:15:53 -- nvmf/common.sh@116 -- # sync 00:15:16.307 19:15:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:16.307 19:15:53 -- nvmf/common.sh@119 -- # set +e 00:15:16.307 19:15:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:16.307 19:15:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:16.307 rmmod nvme_tcp 00:15:16.307 rmmod nvme_fabrics 00:15:16.307 rmmod nvme_keyring 00:15:16.307 19:15:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:16.307 19:15:53 -- nvmf/common.sh@123 -- # set -e 00:15:16.307 19:15:53 -- nvmf/common.sh@124 -- # return 0 00:15:16.307 19:15:53 -- nvmf/common.sh@477 -- # '[' -n 73547 ']' 00:15:16.307 19:15:53 -- nvmf/common.sh@478 -- # killprocess 73547 00:15:16.307 19:15:53 -- common/autotest_common.sh@924 -- # '[' -z 73547 ']' 00:15:16.307 19:15:53 -- common/autotest_common.sh@928 -- # kill -0 73547 00:15:16.307 19:15:53 -- common/autotest_common.sh@929 -- # uname 00:15:16.307 19:15:53 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:16.307 19:15:53 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 73547 00:15:16.565 killing process with pid 73547 00:15:16.565 19:15:53 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:15:16.565 19:15:53 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:15:16.565 19:15:53 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 73547' 00:15:16.565 19:15:53 -- common/autotest_common.sh@943 -- # kill 73547 00:15:16.565 19:15:53 -- common/autotest_common.sh@948 -- # wait 73547 00:15:16.823 19:15:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:16.823 19:15:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:16.823 19:15:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:16.823 19:15:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.823 19:15:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:16.823 19:15:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.823 19:15:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.823 19:15:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.823 19:15:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:16.823 ************************************ 00:15:16.823 END TEST nvmf_multipath 00:15:16.823 ************************************ 00:15:16.823 00:15:16.823 real 0m20.588s 00:15:16.823 user 1m20.696s 00:15:16.823 sys 0m6.181s 00:15:16.823 19:15:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:16.823 19:15:54 -- common/autotest_common.sh@10 -- # set +x 00:15:16.823 19:15:54 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:16.823 19:15:54 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:15:16.823 19:15:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:16.823 19:15:54 -- common/autotest_common.sh@10 -- # set +x 00:15:16.823 ************************************ 00:15:16.823 START TEST nvmf_zcopy 00:15:16.823 ************************************ 00:15:16.823 19:15:54 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:16.823 * Looking for test storage... 00:15:16.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:16.823 19:15:54 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.823 19:15:54 -- nvmf/common.sh@7 -- # uname -s 00:15:16.823 19:15:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.823 19:15:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.823 19:15:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.823 19:15:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.823 19:15:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.823 19:15:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.823 19:15:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.823 19:15:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.823 19:15:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.823 19:15:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.823 19:15:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:15:16.823 19:15:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:15:16.823 19:15:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.823 19:15:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.823 19:15:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.823 19:15:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.823 19:15:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.823 19:15:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.823 19:15:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.823 19:15:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.823 19:15:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.824 
19:15:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.824 19:15:54 -- paths/export.sh@5 -- # export PATH 00:15:16.824 19:15:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.824 19:15:54 -- nvmf/common.sh@46 -- # : 0 00:15:16.824 19:15:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:16.824 19:15:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:16.824 19:15:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:16.824 19:15:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.824 19:15:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.824 19:15:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:16.824 19:15:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:16.824 19:15:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:16.824 19:15:54 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:16.824 19:15:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:16.824 19:15:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.824 19:15:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:16.824 19:15:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:16.824 19:15:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:16.824 19:15:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.824 19:15:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.824 19:15:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.824 19:15:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:16.824 19:15:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:16.824 19:15:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:16.824 19:15:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:16.824 19:15:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:16.824 19:15:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:16.824 19:15:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.824 19:15:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.824 19:15:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.824 19:15:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:16.824 19:15:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.824 19:15:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.824 19:15:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.824 19:15:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.824 19:15:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.824 19:15:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.824 19:15:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.824 19:15:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.824 19:15:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:17.082 19:15:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:17.082 Cannot find device "nvmf_tgt_br" 00:15:17.082 19:15:54 -- nvmf/common.sh@154 -- # true 00:15:17.082 19:15:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.082 Cannot find device "nvmf_tgt_br2" 00:15:17.082 19:15:54 -- nvmf/common.sh@155 -- # true 00:15:17.082 19:15:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:17.082 19:15:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:17.082 Cannot find device "nvmf_tgt_br" 00:15:17.082 19:15:54 -- nvmf/common.sh@157 -- # true 00:15:17.082 19:15:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:17.082 Cannot find device "nvmf_tgt_br2" 00:15:17.082 19:15:54 -- nvmf/common.sh@158 -- # true 00:15:17.082 19:15:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:17.082 19:15:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:17.082 19:15:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.082 19:15:54 -- nvmf/common.sh@161 -- # true 00:15:17.082 19:15:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.082 19:15:54 -- nvmf/common.sh@162 -- # true 00:15:17.082 19:15:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.082 19:15:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.082 19:15:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.082 19:15:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.082 19:15:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.082 19:15:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.082 19:15:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.082 19:15:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:17.082 19:15:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:17.082 19:15:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:17.082 19:15:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:17.082 19:15:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:17.082 19:15:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:17.082 19:15:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.082 19:15:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.082 19:15:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.082 19:15:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:17.341 
19:15:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:17.341 19:15:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.341 19:15:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.341 19:15:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.341 19:15:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.341 19:15:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.341 19:15:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:17.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:15:17.341 00:15:17.341 --- 10.0.0.2 ping statistics --- 00:15:17.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.341 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:17.341 19:15:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:17.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:17.341 00:15:17.341 --- 10.0.0.3 ping statistics --- 00:15:17.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.341 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:17.341 19:15:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:17.341 00:15:17.341 --- 10.0.0.1 ping statistics --- 00:15:17.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.341 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:17.341 19:15:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.341 19:15:54 -- nvmf/common.sh@421 -- # return 0 00:15:17.341 19:15:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:17.341 19:15:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.341 19:15:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:17.341 19:15:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:17.341 19:15:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.341 19:15:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:17.341 19:15:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:17.341 19:15:54 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:17.341 19:15:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:17.341 19:15:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:17.341 19:15:54 -- common/autotest_common.sh@10 -- # set +x 00:15:17.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.341 19:15:54 -- nvmf/common.sh@469 -- # nvmfpid=74131 00:15:17.341 19:15:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.341 19:15:54 -- nvmf/common.sh@470 -- # waitforlisten 74131 00:15:17.341 19:15:54 -- common/autotest_common.sh@817 -- # '[' -z 74131 ']' 00:15:17.341 19:15:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.341 19:15:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.341 19:15:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
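The nvmf_veth_init steps traced above reduce to a short, reproducible recipe. The sketch below is a condensed reproduction aid, not part of the harness: it keeps the interface names and 10.0.0.0/24 addresses from the trace and omits the second target interface (nvmf_tgt_if2 / 10.0.0.3), which follows the same pattern.

  # the target runs in its own network namespace; the host side keeps the initiator
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator gets 10.0.0.1 on the host, target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side veth ends so initiator and target traffic can cross
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port and sanity-check connectivity, as the harness does
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2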
00:15:17.341 19:15:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.341 19:15:54 -- common/autotest_common.sh@10 -- # set +x 00:15:17.341 [2024-02-14 19:15:54.652312] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:15:17.341 [2024-02-14 19:15:54.652740] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.599 [2024-02-14 19:15:54.789860] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.599 [2024-02-14 19:15:54.943953] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:17.599 [2024-02-14 19:15:54.944136] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.599 [2024-02-14 19:15:54.944151] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.599 [2024-02-14 19:15:54.944160] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.599 [2024-02-14 19:15:54.944201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.534 19:15:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.534 19:15:55 -- common/autotest_common.sh@850 -- # return 0 00:15:18.534 19:15:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:18.534 19:15:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:18.534 19:15:55 -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 19:15:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.534 19:15:55 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:18.534 19:15:55 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:18.534 19:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.534 19:15:55 -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 [2024-02-14 19:15:55.659511] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.534 19:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.534 19:15:55 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:18.534 19:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.534 19:15:55 -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 19:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.534 19:15:55 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.534 19:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.534 19:15:55 -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 [2024-02-14 19:15:55.679656] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.534 19:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.534 19:15:55 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:18.534 19:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.534 19:15:55 -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 19:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.534 19:15:55 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:18.535 
19:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.535 19:15:55 -- common/autotest_common.sh@10 -- # set +x 00:15:18.535 malloc0 00:15:18.535 19:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.535 19:15:55 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:18.535 19:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.535 19:15:55 -- common/autotest_common.sh@10 -- # set +x 00:15:18.535 19:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.535 19:15:55 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:18.535 19:15:55 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:18.535 19:15:55 -- nvmf/common.sh@520 -- # config=() 00:15:18.535 19:15:55 -- nvmf/common.sh@520 -- # local subsystem config 00:15:18.535 19:15:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:18.535 19:15:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:18.535 { 00:15:18.535 "params": { 00:15:18.535 "name": "Nvme$subsystem", 00:15:18.535 "trtype": "$TEST_TRANSPORT", 00:15:18.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.535 "adrfam": "ipv4", 00:15:18.535 "trsvcid": "$NVMF_PORT", 00:15:18.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.535 "hdgst": ${hdgst:-false}, 00:15:18.535 "ddgst": ${ddgst:-false} 00:15:18.535 }, 00:15:18.535 "method": "bdev_nvme_attach_controller" 00:15:18.535 } 00:15:18.535 EOF 00:15:18.535 )") 00:15:18.535 19:15:55 -- nvmf/common.sh@542 -- # cat 00:15:18.535 19:15:55 -- nvmf/common.sh@544 -- # jq . 00:15:18.535 19:15:55 -- nvmf/common.sh@545 -- # IFS=, 00:15:18.535 19:15:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:18.535 "params": { 00:15:18.535 "name": "Nvme1", 00:15:18.535 "trtype": "tcp", 00:15:18.535 "traddr": "10.0.0.2", 00:15:18.535 "adrfam": "ipv4", 00:15:18.535 "trsvcid": "4420", 00:15:18.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.535 "hdgst": false, 00:15:18.535 "ddgst": false 00:15:18.535 }, 00:15:18.535 "method": "bdev_nvme_attach_controller" 00:15:18.535 }' 00:15:18.535 [2024-02-14 19:15:55.773295] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:15:18.535 [2024-02-14 19:15:55.773675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74182 ] 00:15:18.535 [2024-02-14 19:15:55.907934] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.793 [2024-02-14 19:15:56.024508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.793 [2024-02-14 19:15:56.024598] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:15:18.793 Running I/O for 10 seconds... 
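The rpc_cmd traces above amount to a small target-side setup for the zcopy run. Written as direct scripts/rpc.py invocations (rpc_cmd is the harness wrapper around the same RPC methods; the default /var/tmp/spdk.sock socket is assumed), the sequence is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport with zero-copy enabled and a zero in-capsule data size
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem allowing any host (-a), with room for up to 10 namespaces
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MB malloc bdev with a 4096-byte block size, exported as namespace 1
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # bdevperf then attaches over NVMe/TCP; in the harness its JSON config is the
  # bdev_nvme_attach_controller block printed by gen_nvmf_target_json, fed in on /dev/fd/62
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192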
00:15:31.024 00:15:31.024 Latency(us) 00:15:31.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.024 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:31.024 Verification LBA range: start 0x0 length 0x1000 00:15:31.024 Nvme1n1 : 10.01 8756.63 68.41 0.00 0.00 14580.10 2025.66 22163.08 00:15:31.024 =================================================================================================================== 00:15:31.024 Total : 8756.63 68.41 0.00 0.00 14580.10 2025.66 22163.08 00:15:31.024 [2024-02-14 19:16:06.220422] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:15:31.024 19:16:06 -- target/zcopy.sh@39 -- # perfpid=74298 00:15:31.024 19:16:06 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:31.024 19:16:06 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:31.024 19:16:06 -- common/autotest_common.sh@10 -- # set +x 00:15:31.024 19:16:06 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:31.024 19:16:06 -- nvmf/common.sh@520 -- # config=() 00:15:31.024 19:16:06 -- nvmf/common.sh@520 -- # local subsystem config 00:15:31.024 19:16:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:31.024 19:16:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:31.024 { 00:15:31.024 "params": { 00:15:31.024 "name": "Nvme$subsystem", 00:15:31.024 "trtype": "$TEST_TRANSPORT", 00:15:31.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:31.024 "adrfam": "ipv4", 00:15:31.024 "trsvcid": "$NVMF_PORT", 00:15:31.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:31.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:31.024 "hdgst": ${hdgst:-false}, 00:15:31.024 "ddgst": ${ddgst:-false} 00:15:31.024 }, 00:15:31.024 "method": "bdev_nvme_attach_controller" 00:15:31.024 } 00:15:31.024 EOF 00:15:31.024 )") 00:15:31.024 19:16:06 -- nvmf/common.sh@542 -- # cat 00:15:31.024 [2024-02-14 19:16:06.485361] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.485428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 19:16:06 -- nvmf/common.sh@544 -- # jq . 
00:15:31.024 19:16:06 -- nvmf/common.sh@545 -- # IFS=, 00:15:31.024 19:16:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:31.024 "params": { 00:15:31.024 "name": "Nvme1", 00:15:31.024 "trtype": "tcp", 00:15:31.024 "traddr": "10.0.0.2", 00:15:31.024 "adrfam": "ipv4", 00:15:31.024 "trsvcid": "4420", 00:15:31.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.024 "hdgst": false, 00:15:31.024 "ddgst": false 00:15:31.024 }, 00:15:31.024 "method": "bdev_nvme_attach_controller" 00:15:31.024 }' 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.497323] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.497377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.509315] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.509369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.520463] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:15:31.024 [2024-02-14 19:16:06.520565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74298 ] 00:15:31.024 [2024-02-14 19:16:06.521314] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.521351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.533337] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.533758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.545331] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.545590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.557370] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.557748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.569383] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.569795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.581396] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.581798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.593349] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.593399] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.605345] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.605399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.617354] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.617412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.629335] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.629382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.641347] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.641396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.653341] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.024 [2024-02-14 19:16:06.653385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.024 [2024-02-14 19:16:06.654990] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.024 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.024 [2024-02-14 19:16:06.665346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.665389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.677346] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.677387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.689350] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.689395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.701362] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.701410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.713365] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.713416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.725346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.725384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.737386] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.737440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.749376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.749424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 
19:16:06.761379] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.761430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.771924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.025 [2024-02-14 19:16:06.771994] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:15:31.025 [2024-02-14 19:16:06.773379] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.773424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.785373] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.785414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.797377] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.797419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.809382] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.809425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.821379] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.821418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.833377] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 
19:16:06.833415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.845381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.845427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.857393] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.857437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.869390] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.869427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.881417] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.881466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.893416] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.893458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.905433] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.905484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.917428] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:31.025 [2024-02-14 19:16:06.917471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.929424] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.929461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.941443] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.941507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 Running I/O for 5 seconds... 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.959707] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.959765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.974553] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.025 [2024-02-14 19:16:06.974604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.025 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.025 [2024-02-14 19:16:06.984823] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:06.984869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:06.999045] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:06.999094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.009153] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.009200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.023531] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.023590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.040329] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.040391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.056822] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.056885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.073323] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.073386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.090270] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.090335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.106723] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.106781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 
19:16:07.123649] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.123717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.138768] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.138833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.155543] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.155619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.172719] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.172793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.188630] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.188694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.207483] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.207573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.222753] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.222822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
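[editor's note] The records above, and the ones that continue below, are the same failed call replayed with fresh timestamps: the test keeps invoking the nvmf_subsystem_add_ns JSON-RPC method for NSID 1, which is already attached to nqn.2016-06.io.spdk:cnode1, so the target rejects every attempt with Code=-32602 (Invalid parameters). A minimal sketch of the request implied by these log lines follows; the method name and params are taken verbatim from the log, while the Unix-socket path (/var/tmp/spdk.sock) is SPDK's default RPC socket and is an assumption here, not something the log shows.

# Sketch only, not part of the test: replay the JSON-RPC call seen in the log.
import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    # Method and params copied from the repeated log record above.
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    # Assumed default SPDK RPC socket path; adjust if the target listens elsewhere.
    sock.connect("/var/tmp/spdk.sock")
    sock.sendall(json.dumps(request).encode())
    # When NSID 1 is already in use, the reply carries the error the log keeps
    # printing: code -32602, message "Invalid parameters".
    print(sock.recv(65536).decode())

[end editor's note]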
00:15:31.026 [2024-02-14 19:16:07.239536] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.239604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.255349] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.255417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.273155] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.273221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.289040] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.289106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.306860] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.306926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.321071] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.321125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.338709] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.338767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.353175] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.353237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.369766] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.369827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.385581] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.385651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.403004] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.026 [2024-02-14 19:16:07.403077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.026 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.026 [2024-02-14 19:16:07.418933] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.418996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.435970] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.436028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.452251] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.452313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.468194] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.468257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.485244] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.485302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.500224] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.500284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.516045] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.516100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.533719] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.533780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.548413] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.548482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.566219] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.566272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.581582] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.581652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.598226] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.598299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.615364] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.615435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.631070] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.631134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.647774] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.647839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.664539] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.664603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.681516] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.681579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.696743] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.696802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.706647] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.706698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.721551] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.721610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.731526] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.731575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.746505] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.746575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.755703] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.755751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.771432] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.771512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.780789] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.780843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.796663] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.796732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.027 [2024-02-14 19:16:07.807069] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.027 [2024-02-14 19:16:07.807138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.027 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.822130] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.822210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.838864] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.838952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.855066] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.855152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.872660] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.872744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.887506] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.887580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.897713] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.897779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.912523] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.912600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.929609] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.929688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.944947] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.945023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.962362] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.962447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.977405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.977481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:07.994016] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:07.994099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:08.011791] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:08.011886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:08.028453] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:08.028540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:08.045705] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:08.045786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:08.060279] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:08.060358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:08.077954] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:08.078036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:08.093636] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:08.093713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:08 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:08.110732] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:08.110809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:08.127138] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:08.127217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:08.143221] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:08.143300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.028 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.028 [2024-02-14 19:16:08.161877] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.028 [2024-02-14 19:16:08.161960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.176603] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.176673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.186660] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.186707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.201804] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.201857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 
19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.210826] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.210871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.226964] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.227018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.243770] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.243831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.261576] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.261640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.277215] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.277281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.293977] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.294048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.310741] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.310800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.325304] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.325381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.340046] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.340120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.356243] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.356320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.373581] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.373661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.389297] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.389374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.406105] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.406182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.029 [2024-02-14 19:16:08.422791] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.029 [2024-02-14 19:16:08.422868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:31.029 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.288 [2024-02-14 19:16:08.439215] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.288 [2024-02-14 19:16:08.439291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.288 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.288 [2024-02-14 19:16:08.457679] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.288 [2024-02-14 19:16:08.457759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.288 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.288 [2024-02-14 19:16:08.473116] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.288 [2024-02-14 19:16:08.473195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.288 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.288 [2024-02-14 19:16:08.490617] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.288 [2024-02-14 19:16:08.490699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.288 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.288 [2024-02-14 19:16:08.506455] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.288 [2024-02-14 19:16:08.506557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.288 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.288 [2024-02-14 19:16:08.524702] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.288 [2024-02-14 19:16:08.524782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.288 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.288 [2024-02-14 19:16:08.539731] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.288 [2024-02-14 19:16:08.539808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:31.288 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.288 [2024-02-14 19:16:08.550071] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.288 [2024-02-14 19:16:08.550154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.288 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.288 [2024-02-14 19:16:08.564616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.288 [2024-02-14 19:16:08.564697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.288 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.288 [2024-02-14 19:16:08.582566] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.289 [2024-02-14 19:16:08.582631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.289 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.289 [2024-02-14 19:16:08.597299] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.289 [2024-02-14 19:16:08.597376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.289 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.289 [2024-02-14 19:16:08.614205] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.289 [2024-02-14 19:16:08.614267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.289 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.289 [2024-02-14 19:16:08.629213] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.289 [2024-02-14 19:16:08.629286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.289 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.289 [2024-02-14 19:16:08.645213] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.289 [2024-02-14 19:16:08.645284] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.289 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.289 [2024-02-14 19:16:08.662459] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.289 [2024-02-14 19:16:08.662554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.289 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.289 [2024-02-14 19:16:08.677728] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.289 [2024-02-14 19:16:08.677803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.289 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.289 [2024-02-14 19:16:08.688202] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.289 [2024-02-14 19:16:08.688268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.289 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.289 [2024-02-14 19:16:08.703048] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.289 [2024-02-14 19:16:08.703119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.548 [2024-02-14 19:16:08.720740] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.548 [2024-02-14 19:16:08.720822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.548 [2024-02-14 19:16:08.735149] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.548 [2024-02-14 19:16:08.735223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.548 [2024-02-14 19:16:08.752198] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.548 [2024-02-14 
19:16:08.752262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.548 [2024-02-14 19:16:08.767791] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.548 [2024-02-14 19:16:08.767844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.548 [2024-02-14 19:16:08.785214] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.548 [2024-02-14 19:16:08.785289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.548 [2024-02-14 19:16:08.800985] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.548 [2024-02-14 19:16:08.801050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.548 [2024-02-14 19:16:08.810730] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.548 [2024-02-14 19:16:08.810776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.548 [2024-02-14 19:16:08.825264] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.548 [2024-02-14 19:16:08.825321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.548 [2024-02-14 19:16:08.842870] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.548 [2024-02-14 19:16:08.842942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.548 [2024-02-14 19:16:08.858071] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:31.548 [2024-02-14 19:16:08.858144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:31.548 2024/02/14 19:16:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:31.548 [2024-02-14 19:16:08.874944] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[... the three-record pattern above (subsystem.c:1753 "Requested NSID 1 already in use", nvmf_rpc.c:1513 "Unable to add namespace", JSON-RPC error Code=-32602 Msg=Invalid parameters for nvmf_subsystem_add_ns) repeats with only the timestamps changing, for every retry from 2024-02-14 19:16:08.874944 through 19:16:11.084878 (console time 00:15:31.548 to 00:15:33.886) ...]
00:15:33.886 [2024-02-14 19:16:11.102405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:33.886 [2024-02-14 19:16:11.102481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.117988] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.118056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.136419] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.136502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.151940] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.152004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.168131] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.168193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.186136] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.186209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.201336] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.201404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.219403] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:33.886 [2024-02-14 19:16:11.219478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.234712] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.234784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.244037] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.244098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.260054] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.260124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.270209] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.270275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.886 [2024-02-14 19:16:11.284894] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.886 [2024-02-14 19:16:11.284960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.886 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.302418] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.302496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.317785] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.317848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.336986] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.337059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.351953] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.352015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.369381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.369448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.385263] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.385332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.402006] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.402067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.417371] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.417435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.433598] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.433668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.451443] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.451536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.467586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.467654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.486060] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.486134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.501459] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.501540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.518258] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.518331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 19:16:11.533951] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.534016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.145 [2024-02-14 
19:16:11.551821] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.145 [2024-02-14 19:16:11.551892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.145 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.404 [2024-02-14 19:16:11.567319] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.404 [2024-02-14 19:16:11.567386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.404 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.404 [2024-02-14 19:16:11.584207] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.404 [2024-02-14 19:16:11.584281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.404 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.404 [2024-02-14 19:16:11.601581] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.404 [2024-02-14 19:16:11.601657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.404 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.404 [2024-02-14 19:16:11.616995] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.404 [2024-02-14 19:16:11.617062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.404 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.404 [2024-02-14 19:16:11.627163] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.404 [2024-02-14 19:16:11.627222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.404 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.405 [2024-02-14 19:16:11.642936] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.642988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:34.405 [2024-02-14 19:16:11.659551] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.659622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.405 [2024-02-14 19:16:11.676787] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.676861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.405 [2024-02-14 19:16:11.692388] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.692454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.405 [2024-02-14 19:16:11.709134] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.709203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.405 [2024-02-14 19:16:11.726775] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.726847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.405 [2024-02-14 19:16:11.742566] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.742634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.405 [2024-02-14 19:16:11.759673] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.759745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:34.405 [2024-02-14 19:16:11.775735] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.775805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.405 [2024-02-14 19:16:11.793107] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.793176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.405 [2024-02-14 19:16:11.808651] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.405 [2024-02-14 19:16:11.808718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.405 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.664 [2024-02-14 19:16:11.825366] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.664 [2024-02-14 19:16:11.825438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.664 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.664 [2024-02-14 19:16:11.842520] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.664 [2024-02-14 19:16:11.842588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.664 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.664 [2024-02-14 19:16:11.857985] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.664 [2024-02-14 19:16:11.858053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.664 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.664 [2024-02-14 19:16:11.875843] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.664 [2024-02-14 19:16:11.875917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.664 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:11.891058] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:11.891131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:11.908448] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:11.908534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:11.923040] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:11.923112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:11.939075] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:11.939144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:11.951711] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:11.951774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 00:15:34.665 Latency(us) 00:15:34.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.665 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:34.665 Nvme1n1 : 5.01 11588.33 90.53 0.00 0.00 11030.71 4736.47 23235.49 00:15:34.665 =================================================================================================================== 00:15:34.665 Total : 11588.33 90.53 0.00 0.00 11030.71 4736.47 23235.49 00:15:34.665 [2024-02-14 19:16:11.954422] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:15:34.665 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:11.962571] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:11.962627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:34.665 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:11.974568] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:11.974622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:11.986561] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:11.986596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:11.998564] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:11.998609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:12.010615] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:12.010668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:12.022592] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:12.022646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:12.034610] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:12.034674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:12.046585] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:12.046638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:12.058608] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:12.058669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.665 [2024-02-14 19:16:12.070604] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.665 [2024-02-14 19:16:12.070662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.665 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.942 [2024-02-14 19:16:12.082600] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.942 [2024-02-14 19:16:12.082656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.942 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.942 [2024-02-14 19:16:12.094600] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.942 [2024-02-14 19:16:12.094653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.942 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.943 [2024-02-14 19:16:12.106610] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.943 [2024-02-14 19:16:12.106654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.943 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.943 [2024-02-14 19:16:12.118616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.943 [2024-02-14 19:16:12.118672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.943 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.943 [2024-02-14 19:16:12.130616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.943 [2024-02-14 19:16:12.130665] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.943 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.943 [2024-02-14 19:16:12.142625] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.943 [2024-02-14 19:16:12.142675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.943 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.943 [2024-02-14 19:16:12.154628] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.943 [2024-02-14 19:16:12.154681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.943 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.943 [2024-02-14 19:16:12.166630] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.943 [2024-02-14 19:16:12.166681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.943 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.943 [2024-02-14 19:16:12.178635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.943 [2024-02-14 19:16:12.178687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.943 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.943 [2024-02-14 19:16:12.190635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.943 [2024-02-14 19:16:12.190685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.943 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.943 [2024-02-14 19:16:12.202643] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.943 [2024-02-14 19:16:12.202693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.943 2024/02/14 19:16:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.943 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74298) - No such process 00:15:34.943 19:16:12 -- target/zcopy.sh@49 -- # 
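The failure above is easy to reproduce by hand against a running nvmf target; the lines below are a minimal sketch driven through SPDK's scripts/rpc.py and are not part of this test run (the malloc bdev size and block size are arbitrary, and the target is assumed to be listening on the default /var/tmp/spdk.sock):
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add of NSID 1 succeeds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add fails with "Requested NSID 1 already in use"
                                                                                 # and the RPC returns Code=-32602 Msg=Invalid parameters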
wait 74298 00:15:34.943 19:16:12 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.943 19:16:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.943 19:16:12 -- common/autotest_common.sh@10 -- # set +x 00:15:34.943 19:16:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.943 19:16:12 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:34.943 19:16:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.943 19:16:12 -- common/autotest_common.sh@10 -- # set +x 00:15:34.943 delay0 00:15:34.943 19:16:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.943 19:16:12 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:34.943 19:16:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.943 19:16:12 -- common/autotest_common.sh@10 -- # set +x 00:15:34.943 19:16:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.943 19:16:12 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:35.202 [2024-02-14 19:16:12.398596] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:41.760 Initializing NVMe Controllers 00:15:41.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:41.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:41.760 Initialization complete. Launching workers. 00:15:41.760 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 72 00:15:41.760 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 359, failed to submit 33 00:15:41.760 success 171, unsuccess 188, failed 0 00:15:41.760 19:16:18 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:41.760 19:16:18 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:41.760 19:16:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:41.760 19:16:18 -- nvmf/common.sh@116 -- # sync 00:15:41.760 19:16:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:41.760 19:16:18 -- nvmf/common.sh@119 -- # set +e 00:15:41.760 19:16:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:41.760 19:16:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:41.760 rmmod nvme_tcp 00:15:41.760 rmmod nvme_fabrics 00:15:41.760 rmmod nvme_keyring 00:15:41.760 19:16:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:41.760 19:16:18 -- nvmf/common.sh@123 -- # set -e 00:15:41.760 19:16:18 -- nvmf/common.sh@124 -- # return 0 00:15:41.760 19:16:18 -- nvmf/common.sh@477 -- # '[' -n 74131 ']' 00:15:41.760 19:16:18 -- nvmf/common.sh@478 -- # killprocess 74131 00:15:41.760 19:16:18 -- common/autotest_common.sh@924 -- # '[' -z 74131 ']' 00:15:41.760 19:16:18 -- common/autotest_common.sh@928 -- # kill -0 74131 00:15:41.760 19:16:18 -- common/autotest_common.sh@929 -- # uname 00:15:41.760 19:16:18 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:41.760 19:16:18 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 74131 00:15:41.760 19:16:18 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:15:41.760 19:16:18 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:15:41.760 killing process with pid 74131 00:15:41.760 19:16:18 
-- common/autotest_common.sh@942 -- # echo 'killing process with pid 74131' 00:15:41.760 19:16:18 -- common/autotest_common.sh@943 -- # kill 74131 00:15:41.760 19:16:18 -- common/autotest_common.sh@948 -- # wait 74131 00:15:41.760 19:16:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:41.760 19:16:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:41.760 19:16:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:41.760 19:16:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.760 19:16:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:41.760 19:16:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.760 19:16:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.761 19:16:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.761 19:16:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:41.761 00:15:41.761 real 0m24.904s 00:15:41.761 user 0m39.369s 00:15:41.761 sys 0m7.190s 00:15:41.761 19:16:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.761 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:15:41.761 ************************************ 00:15:41.761 END TEST nvmf_zcopy 00:15:41.761 ************************************ 00:15:41.761 19:16:19 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:41.761 19:16:19 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:15:41.761 19:16:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:41.761 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:15:41.761 ************************************ 00:15:41.761 START TEST nvmf_nmic 00:15:41.761 ************************************ 00:15:41.761 19:16:19 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:41.761 * Looking for test storage... 
00:15:41.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:41.761 19:16:19 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.761 19:16:19 -- nvmf/common.sh@7 -- # uname -s 00:15:41.761 19:16:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.761 19:16:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.761 19:16:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.761 19:16:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.761 19:16:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.761 19:16:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.761 19:16:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.761 19:16:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.761 19:16:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.761 19:16:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.761 19:16:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:15:41.761 19:16:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:15:41.761 19:16:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.761 19:16:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.761 19:16:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.761 19:16:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.761 19:16:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.761 19:16:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.761 19:16:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.761 19:16:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.761 19:16:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.761 19:16:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.761 19:16:19 -- paths/export.sh@5 
-- # export PATH 00:15:41.761 19:16:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.761 19:16:19 -- nvmf/common.sh@46 -- # : 0 00:15:41.761 19:16:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:41.761 19:16:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:41.761 19:16:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:41.761 19:16:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.761 19:16:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.761 19:16:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:41.761 19:16:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:41.761 19:16:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:41.761 19:16:19 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.761 19:16:19 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.761 19:16:19 -- target/nmic.sh@14 -- # nvmftestinit 00:15:41.761 19:16:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:41.761 19:16:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.761 19:16:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:41.761 19:16:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:41.761 19:16:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:41.761 19:16:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.761 19:16:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.761 19:16:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.019 19:16:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:42.019 19:16:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:42.019 19:16:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:42.019 19:16:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:42.019 19:16:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:42.019 19:16:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:42.019 19:16:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.019 19:16:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.019 19:16:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:42.019 19:16:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:42.019 19:16:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.019 19:16:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.019 19:16:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.019 19:16:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.019 19:16:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.019 19:16:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.019 19:16:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.019 19:16:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.019 19:16:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:42.019 19:16:19 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:42.019 Cannot find device "nvmf_tgt_br" 00:15:42.019 19:16:19 -- nvmf/common.sh@154 -- # true 00:15:42.019 19:16:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.019 Cannot find device "nvmf_tgt_br2" 00:15:42.019 19:16:19 -- nvmf/common.sh@155 -- # true 00:15:42.019 19:16:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:42.019 19:16:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:42.019 Cannot find device "nvmf_tgt_br" 00:15:42.019 19:16:19 -- nvmf/common.sh@157 -- # true 00:15:42.019 19:16:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:42.019 Cannot find device "nvmf_tgt_br2" 00:15:42.019 19:16:19 -- nvmf/common.sh@158 -- # true 00:15:42.019 19:16:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:42.019 19:16:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:42.019 19:16:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.019 19:16:19 -- nvmf/common.sh@161 -- # true 00:15:42.019 19:16:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.019 19:16:19 -- nvmf/common.sh@162 -- # true 00:15:42.019 19:16:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:42.019 19:16:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:42.019 19:16:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:42.019 19:16:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:42.019 19:16:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:42.019 19:16:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:42.019 19:16:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:42.019 19:16:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:42.019 19:16:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:42.019 19:16:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:42.019 19:16:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:42.019 19:16:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:42.019 19:16:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:42.019 19:16:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:42.019 19:16:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:42.019 19:16:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:42.019 19:16:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:42.019 19:16:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:42.019 19:16:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.277 19:16:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.277 19:16:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.277 19:16:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.277 19:16:19 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.277 19:16:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:42.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:15:42.277 00:15:42.277 --- 10.0.0.2 ping statistics --- 00:15:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.277 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:42.277 19:16:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:42.277 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.277 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:15:42.277 00:15:42.277 --- 10.0.0.3 ping statistics --- 00:15:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.277 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:42.277 19:16:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:42.277 00:15:42.277 --- 10.0.0.1 ping statistics --- 00:15:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.277 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:42.277 19:16:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.277 19:16:19 -- nvmf/common.sh@421 -- # return 0 00:15:42.277 19:16:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:42.277 19:16:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.277 19:16:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:42.277 19:16:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:42.277 19:16:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.277 19:16:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:42.277 19:16:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:42.277 19:16:19 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:42.277 19:16:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:42.277 19:16:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:42.277 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:15:42.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.277 19:16:19 -- nvmf/common.sh@469 -- # nvmfpid=74627 00:15:42.277 19:16:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:42.277 19:16:19 -- nvmf/common.sh@470 -- # waitforlisten 74627 00:15:42.277 19:16:19 -- common/autotest_common.sh@817 -- # '[' -z 74627 ']' 00:15:42.277 19:16:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.277 19:16:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:42.277 19:16:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.277 19:16:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:42.277 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:15:42.277 [2024-02-14 19:16:19.591383] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:15:42.277 [2024-02-14 19:16:19.591741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.536 [2024-02-14 19:16:19.726338] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.536 [2024-02-14 19:16:19.854395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.536 [2024-02-14 19:16:19.854853] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.536 [2024-02-14 19:16:19.854999] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.536 [2024-02-14 19:16:19.855122] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.536 [2024-02-14 19:16:19.855397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.536 [2024-02-14 19:16:19.855516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.536 [2024-02-14 19:16:19.855642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.536 [2024-02-14 19:16:19.855650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.472 19:16:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:43.473 19:16:20 -- common/autotest_common.sh@850 -- # return 0 00:15:43.473 19:16:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:43.473 19:16:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:43.473 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.473 19:16:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.473 19:16:20 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.473 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.473 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.473 [2024-02-14 19:16:20.594570] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.473 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.473 19:16:20 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.473 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.473 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.473 Malloc0 00:15:43.473 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.473 19:16:20 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:43.473 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.473 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.473 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.473 19:16:20 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.473 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.473 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.473 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.473 19:16:20 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.473 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.473 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.473 
[2024-02-14 19:16:20.667681] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.473 test case1: single bdev can't be used in multiple subsystems 00:15:43.473 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.473 19:16:20 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:43.473 19:16:20 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:43.473 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.473 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.473 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.473 19:16:20 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:43.473 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.473 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.473 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.473 19:16:20 -- target/nmic.sh@28 -- # nmic_status=0 00:15:43.473 19:16:20 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:43.473 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.473 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.473 [2024-02-14 19:16:20.691550] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:43.473 [2024-02-14 19:16:20.691591] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:43.473 [2024-02-14 19:16:20.691604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.473 2024/02/14 19:16:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.473 request: 00:15:43.473 { 00:15:43.473 "method": "nvmf_subsystem_add_ns", 00:15:43.473 "params": { 00:15:43.473 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:43.473 "namespace": { 00:15:43.473 "bdev_name": "Malloc0" 00:15:43.473 } 00:15:43.473 } 00:15:43.473 } 00:15:43.473 Got JSON-RPC error response 00:15:43.473 GoRPCClient: error on JSON-RPC call 00:15:43.473 19:16:20 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:43.473 19:16:20 -- target/nmic.sh@29 -- # nmic_status=1 00:15:43.473 19:16:20 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:43.473 19:16:20 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:43.473 Adding namespace failed - expected result. 
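The JSON-RPC error above is the expected outcome of test case 1: a malloc bdev already claimed by one subsystem cannot be added to a second one. As a minimal sketch only (assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and the in-tree scripts/rpc.py; bdev and NQN names taken from the trace above), the same sequence looks roughly like this:

# Sketch: reproduce the claim conflict exercised by test case 1.
# Assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                      # same transport options the test passes
$rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # cnode1 claims Malloc0 (exclusive_write)
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0     # expected to fail: bdev already claimed by cnode1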
00:15:43.473 19:16:20 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:43.473 test case2: host connect to nvmf target in multiple paths 00:15:43.473 19:16:20 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:43.473 19:16:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.473 19:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:43.473 [2024-02-14 19:16:20.707744] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:43.473 19:16:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.473 19:16:20 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:43.473 19:16:20 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:43.732 19:16:21 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:43.732 19:16:21 -- common/autotest_common.sh@1175 -- # local i=0 00:15:43.732 19:16:21 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.732 19:16:21 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:15:43.732 19:16:21 -- common/autotest_common.sh@1182 -- # sleep 2 00:15:45.638 19:16:23 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:15:45.638 19:16:23 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:15:45.638 19:16:23 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:15:45.897 19:16:23 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:15:45.897 19:16:23 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:15:45.897 19:16:23 -- common/autotest_common.sh@1185 -- # return 0 00:15:45.897 19:16:23 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:45.897 [global] 00:15:45.897 thread=1 00:15:45.897 invalidate=1 00:15:45.897 rw=write 00:15:45.897 time_based=1 00:15:45.897 runtime=1 00:15:45.897 ioengine=libaio 00:15:45.897 direct=1 00:15:45.897 bs=4096 00:15:45.897 iodepth=1 00:15:45.897 norandommap=0 00:15:45.897 numjobs=1 00:15:45.897 00:15:45.897 verify_dump=1 00:15:45.897 verify_backlog=512 00:15:45.897 verify_state_save=0 00:15:45.897 do_verify=1 00:15:45.897 verify=crc32c-intel 00:15:45.897 [job0] 00:15:45.897 filename=/dev/nvme0n1 00:15:45.897 Could not set queue depth (nvme0n1) 00:15:45.897 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.897 fio-3.35 00:15:45.897 Starting 1 thread 00:15:47.274 00:15:47.274 job0: (groupid=0, jobs=1): err= 0: pid=74737: Wed Feb 14 19:16:24 2024 00:15:47.274 read: IOPS=2908, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:15:47.274 slat (nsec): min=13989, max=61442, avg=17028.70, stdev=4257.40 00:15:47.274 clat (usec): min=130, max=935, avg=168.70, stdev=26.69 00:15:47.274 lat (usec): min=146, max=952, avg=185.73, stdev=27.14 00:15:47.274 clat percentiles (usec): 00:15:47.274 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:15:47.274 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:15:47.274 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 192], 
95.00th=[ 204], 00:15:47.274 | 99.00th=[ 229], 99.50th=[ 269], 99.90th=[ 510], 99.95th=[ 553], 00:15:47.274 | 99.99th=[ 938] 00:15:47.274 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:47.274 slat (usec): min=20, max=178, avg=25.89, stdev= 7.46 00:15:47.274 clat (usec): min=94, max=592, avg=120.26, stdev=20.50 00:15:47.274 lat (usec): min=117, max=614, avg=146.15, stdev=23.05 00:15:47.274 clat percentiles (usec): 00:15:47.274 | 1.00th=[ 99], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 108], 00:15:47.274 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 117], 60.00th=[ 120], 00:15:47.274 | 70.00th=[ 125], 80.00th=[ 131], 90.00th=[ 141], 95.00th=[ 151], 00:15:47.274 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 306], 99.95th=[ 537], 00:15:47.274 | 99.99th=[ 594] 00:15:47.274 bw ( KiB/s): min=12190, max=12190, per=99.30%, avg=12190.00, stdev= 0.00, samples=1 00:15:47.274 iops : min= 3047, max= 3047, avg=3047.00, stdev= 0.00, samples=1 00:15:47.274 lat (usec) : 100=1.07%, 250=98.55%, 500=0.30%, 750=0.07%, 1000=0.02% 00:15:47.274 cpu : usr=2.50%, sys=9.00%, ctx=5983, majf=0, minf=2 00:15:47.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.274 issued rwts: total=2911,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.274 00:15:47.274 Run status group 0 (all jobs): 00:15:47.274 READ: bw=11.4MiB/s (11.9MB/s), 11.4MiB/s-11.4MiB/s (11.9MB/s-11.9MB/s), io=11.4MiB (11.9MB), run=1001-1001msec 00:15:47.274 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:15:47.274 00:15:47.274 Disk stats (read/write): 00:15:47.274 nvme0n1: ios=2610/2818, merge=0/0, ticks=491/394, in_queue=885, util=91.38% 00:15:47.274 19:16:24 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:47.274 19:16:24 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:47.274 19:16:24 -- common/autotest_common.sh@1196 -- # local i=0 00:15:47.274 19:16:24 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:15:47.274 19:16:24 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.274 19:16:24 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:15:47.274 19:16:24 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.274 19:16:24 -- common/autotest_common.sh@1208 -- # return 0 00:15:47.274 19:16:24 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:47.274 19:16:24 -- target/nmic.sh@53 -- # nvmftestfini 00:15:47.274 19:16:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:47.274 19:16:24 -- nvmf/common.sh@116 -- # sync 00:15:47.274 19:16:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:47.274 19:16:24 -- nvmf/common.sh@119 -- # set +e 00:15:47.274 19:16:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:47.274 19:16:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:47.274 rmmod nvme_tcp 00:15:47.274 rmmod nvme_fabrics 00:15:47.274 rmmod nvme_keyring 00:15:47.274 19:16:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:47.274 19:16:24 -- nvmf/common.sh@123 -- # set -e 00:15:47.274 19:16:24 -- nvmf/common.sh@124 -- # return 0 00:15:47.274 19:16:24 -- 
nvmf/common.sh@477 -- # '[' -n 74627 ']' 00:15:47.274 19:16:24 -- nvmf/common.sh@478 -- # killprocess 74627 00:15:47.274 19:16:24 -- common/autotest_common.sh@924 -- # '[' -z 74627 ']' 00:15:47.274 19:16:24 -- common/autotest_common.sh@928 -- # kill -0 74627 00:15:47.274 19:16:24 -- common/autotest_common.sh@929 -- # uname 00:15:47.274 19:16:24 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:47.274 19:16:24 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 74627 00:15:47.274 killing process with pid 74627 00:15:47.274 19:16:24 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:15:47.274 19:16:24 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:15:47.274 19:16:24 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 74627' 00:15:47.274 19:16:24 -- common/autotest_common.sh@943 -- # kill 74627 00:15:47.274 19:16:24 -- common/autotest_common.sh@948 -- # wait 74627 00:15:47.532 19:16:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:47.532 19:16:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:47.532 19:16:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:47.532 19:16:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.532 19:16:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:47.532 19:16:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.532 19:16:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.532 19:16:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.532 19:16:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:47.532 00:15:47.532 real 0m5.841s 00:15:47.532 user 0m19.465s 00:15:47.532 sys 0m1.353s 00:15:47.532 19:16:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:47.532 19:16:24 -- common/autotest_common.sh@10 -- # set +x 00:15:47.532 ************************************ 00:15:47.532 END TEST nvmf_nmic 00:15:47.532 ************************************ 00:15:47.792 19:16:24 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:47.792 19:16:24 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:15:47.792 19:16:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:47.792 19:16:24 -- common/autotest_common.sh@10 -- # set +x 00:15:47.792 ************************************ 00:15:47.792 START TEST nvmf_fio_target 00:15:47.792 ************************************ 00:15:47.792 19:16:24 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:47.792 * Looking for test storage... 
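Both the nvmf_nmic run that just finished and the nvmf_fio_target run that starts here rebuild the same veth-based test network (nvmf_veth_init in nvmf/common.sh). Condensed from the ip/iptables commands traced above and repeated below, with interface, namespace, and address names exactly as they appear in the log, the topology is roughly:

# Sketch of the nvmf_veth_init topology (link-up commands omitted for brevity).
ip netns add nvmf_tgt_ns_spdk                                      # target-side network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br           # first target veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2          # second target veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                    # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                           # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                                     # bridge the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP to the default port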
00:15:47.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:47.792 19:16:25 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:47.792 19:16:25 -- nvmf/common.sh@7 -- # uname -s 00:15:47.792 19:16:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.792 19:16:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.792 19:16:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.792 19:16:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.792 19:16:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.792 19:16:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.792 19:16:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.792 19:16:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.792 19:16:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.792 19:16:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.792 19:16:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:15:47.792 19:16:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:15:47.792 19:16:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.792 19:16:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.792 19:16:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:47.792 19:16:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.792 19:16:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.792 19:16:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.792 19:16:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.792 19:16:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.792 19:16:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.792 19:16:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.792 19:16:25 -- paths/export.sh@5 
-- # export PATH 00:15:47.792 19:16:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.792 19:16:25 -- nvmf/common.sh@46 -- # : 0 00:15:47.792 19:16:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:47.792 19:16:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:47.792 19:16:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:47.792 19:16:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.792 19:16:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.792 19:16:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:47.792 19:16:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:47.792 19:16:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:47.792 19:16:25 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:47.792 19:16:25 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:47.792 19:16:25 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.792 19:16:25 -- target/fio.sh@16 -- # nvmftestinit 00:15:47.792 19:16:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:47.792 19:16:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.792 19:16:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:47.792 19:16:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:47.792 19:16:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:47.792 19:16:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.792 19:16:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.792 19:16:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.792 19:16:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:47.792 19:16:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:47.792 19:16:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:47.792 19:16:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:47.792 19:16:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:47.792 19:16:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:47.792 19:16:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.792 19:16:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.792 19:16:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:47.792 19:16:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:47.792 19:16:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:47.792 19:16:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:47.792 19:16:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:47.792 19:16:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.792 19:16:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:47.792 19:16:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:47.792 19:16:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:47.792 19:16:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:47.792 19:16:25 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:47.792 19:16:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:47.792 Cannot find device "nvmf_tgt_br" 00:15:47.792 19:16:25 -- nvmf/common.sh@154 -- # true 00:15:47.792 19:16:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.792 Cannot find device "nvmf_tgt_br2" 00:15:47.792 19:16:25 -- nvmf/common.sh@155 -- # true 00:15:47.792 19:16:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:47.792 19:16:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:47.792 Cannot find device "nvmf_tgt_br" 00:15:47.792 19:16:25 -- nvmf/common.sh@157 -- # true 00:15:47.792 19:16:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:47.792 Cannot find device "nvmf_tgt_br2" 00:15:47.792 19:16:25 -- nvmf/common.sh@158 -- # true 00:15:47.792 19:16:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:47.792 19:16:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:48.052 19:16:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.052 19:16:25 -- nvmf/common.sh@161 -- # true 00:15:48.052 19:16:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.052 19:16:25 -- nvmf/common.sh@162 -- # true 00:15:48.052 19:16:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.052 19:16:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.052 19:16:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.052 19:16:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.052 19:16:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.052 19:16:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.052 19:16:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.052 19:16:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:48.052 19:16:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:48.052 19:16:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:48.052 19:16:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:48.052 19:16:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:48.052 19:16:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:48.052 19:16:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.052 19:16:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.052 19:16:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.052 19:16:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:48.052 19:16:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:48.052 19:16:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.052 19:16:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.052 19:16:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.052 19:16:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.052 19:16:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.052 19:16:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:48.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:15:48.052 00:15:48.052 --- 10.0.0.2 ping statistics --- 00:15:48.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.052 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:48.052 19:16:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:48.052 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.052 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:48.052 00:15:48.052 --- 10.0.0.3 ping statistics --- 00:15:48.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.052 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:48.052 19:16:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:48.052 00:15:48.052 --- 10.0.0.1 ping statistics --- 00:15:48.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.052 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:48.052 19:16:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.052 19:16:25 -- nvmf/common.sh@421 -- # return 0 00:15:48.052 19:16:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:48.052 19:16:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.052 19:16:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:48.052 19:16:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:48.052 19:16:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.052 19:16:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:48.052 19:16:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:48.312 19:16:25 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:48.312 19:16:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:48.312 19:16:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:48.312 19:16:25 -- common/autotest_common.sh@10 -- # set +x 00:15:48.312 19:16:25 -- nvmf/common.sh@469 -- # nvmfpid=74913 00:15:48.312 19:16:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:48.312 19:16:25 -- nvmf/common.sh@470 -- # waitforlisten 74913 00:15:48.312 19:16:25 -- common/autotest_common.sh@817 -- # '[' -z 74913 ']' 00:15:48.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.312 19:16:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.312 19:16:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:48.312 19:16:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.312 19:16:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:48.312 19:16:25 -- common/autotest_common.sh@10 -- # set +x 00:15:48.312 [2024-02-14 19:16:25.548007] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:15:48.312 [2024-02-14 19:16:25.548378] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.312 [2024-02-14 19:16:25.692710] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:48.571 [2024-02-14 19:16:25.829085] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:48.571 [2024-02-14 19:16:25.829267] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.571 [2024-02-14 19:16:25.829285] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.571 [2024-02-14 19:16:25.829296] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.571 [2024-02-14 19:16:25.829514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.571 [2024-02-14 19:16:25.830283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.571 [2024-02-14 19:16:25.830464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:48.571 [2024-02-14 19:16:25.830476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.139 19:16:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:49.139 19:16:26 -- common/autotest_common.sh@850 -- # return 0 00:15:49.139 19:16:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:49.139 19:16:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:49.139 19:16:26 -- common/autotest_common.sh@10 -- # set +x 00:15:49.139 19:16:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.139 19:16:26 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:49.398 [2024-02-14 19:16:26.804362] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.657 19:16:26 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:49.916 19:16:27 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:49.916 19:16:27 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.174 19:16:27 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:50.174 19:16:27 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.434 19:16:27 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:50.434 19:16:27 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.692 19:16:28 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:50.692 19:16:28 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:50.952 19:16:28 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:51.211 19:16:28 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:51.211 19:16:28 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:51.470 19:16:28 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:51.470 19:16:28 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:51.729 19:16:29 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:15:51.729 19:16:29 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:51.988 19:16:29 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:52.248 19:16:29 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:52.248 19:16:29 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:52.507 19:16:29 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:52.507 19:16:29 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:52.765 19:16:29 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.025 [2024-02-14 19:16:30.182664] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.025 19:16:30 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:53.025 19:16:30 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:53.284 19:16:30 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.542 19:16:30 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:53.542 19:16:30 -- common/autotest_common.sh@1175 -- # local i=0 00:15:53.542 19:16:30 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.542 19:16:30 -- common/autotest_common.sh@1177 -- # [[ -n 4 ]] 00:15:53.542 19:16:30 -- common/autotest_common.sh@1178 -- # nvme_device_counter=4 00:15:53.542 19:16:30 -- common/autotest_common.sh@1182 -- # sleep 2 00:15:55.446 19:16:32 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:15:55.446 19:16:32 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:15:55.446 19:16:32 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.446 19:16:32 -- common/autotest_common.sh@1184 -- # nvme_devices=4 00:15:55.446 19:16:32 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.446 19:16:32 -- common/autotest_common.sh@1185 -- # return 0 00:15:55.446 19:16:32 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:55.704 [global] 00:15:55.704 thread=1 00:15:55.704 invalidate=1 00:15:55.704 rw=write 00:15:55.704 time_based=1 00:15:55.704 runtime=1 00:15:55.704 ioengine=libaio 00:15:55.704 direct=1 00:15:55.704 bs=4096 00:15:55.704 iodepth=1 00:15:55.704 norandommap=0 00:15:55.704 numjobs=1 00:15:55.704 00:15:55.704 verify_dump=1 00:15:55.704 verify_backlog=512 00:15:55.704 verify_state_save=0 00:15:55.704 do_verify=1 00:15:55.704 verify=crc32c-intel 00:15:55.704 [job0] 00:15:55.704 filename=/dev/nvme0n1 00:15:55.704 [job1] 00:15:55.704 filename=/dev/nvme0n2 00:15:55.704 [job2] 00:15:55.704 filename=/dev/nvme0n3 00:15:55.704 [job3] 00:15:55.704 filename=/dev/nvme0n4 00:15:55.704 Could not set queue depth (nvme0n1) 00:15:55.704 Could not set queue depth (nvme0n2) 
00:15:55.704 Could not set queue depth (nvme0n3) 00:15:55.704 Could not set queue depth (nvme0n4) 00:15:55.704 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.704 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.704 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.704 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.704 fio-3.35 00:15:55.704 Starting 4 threads 00:15:57.078 00:15:57.078 job0: (groupid=0, jobs=1): err= 0: pid=75201: Wed Feb 14 19:16:34 2024 00:15:57.078 read: IOPS=2822, BW=11.0MiB/s (11.6MB/s)(11.0MiB/1001msec) 00:15:57.078 slat (nsec): min=13970, max=58877, avg=15734.95, stdev=1921.58 00:15:57.078 clat (usec): min=141, max=244, avg=168.23, stdev=10.43 00:15:57.078 lat (usec): min=155, max=260, avg=183.97, stdev=10.69 00:15:57.079 clat percentiles (usec): 00:15:57.079 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:15:57.079 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:15:57.079 | 70.00th=[ 174], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:15:57.079 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 229], 99.95th=[ 241], 00:15:57.079 | 99.99th=[ 245] 00:15:57.079 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:57.079 slat (usec): min=20, max=209, avg=23.72, stdev= 5.30 00:15:57.079 clat (usec): min=100, max=995, avg=129.67, stdev=23.97 00:15:57.079 lat (usec): min=123, max=1018, avg=153.39, stdev=25.76 00:15:57.079 clat percentiles (usec): 00:15:57.079 | 1.00th=[ 106], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 119], 00:15:57.079 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 129], 00:15:57.079 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 165], 00:15:57.079 | 99.00th=[ 202], 99.50th=[ 225], 99.90th=[ 262], 99.95th=[ 338], 00:15:57.079 | 99.99th=[ 996] 00:15:57.079 bw ( KiB/s): min=12288, max=12288, per=40.04%, avg=12288.00, stdev= 0.00, samples=1 00:15:57.079 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:57.079 lat (usec) : 250=99.85%, 500=0.14%, 1000=0.02% 00:15:57.079 cpu : usr=1.80%, sys=8.60%, ctx=5897, majf=0, minf=15 00:15:57.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.079 issued rwts: total=2825,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.079 job1: (groupid=0, jobs=1): err= 0: pid=75202: Wed Feb 14 19:16:34 2024 00:15:57.079 read: IOPS=1342, BW=5371KiB/s (5500kB/s)(5376KiB/1001msec) 00:15:57.079 slat (nsec): min=17065, max=64074, avg=25209.13, stdev=5068.57 00:15:57.079 clat (usec): min=178, max=3846, avg=362.71, stdev=97.35 00:15:57.079 lat (usec): min=216, max=3872, avg=387.92, stdev=97.43 00:15:57.079 clat percentiles (usec): 00:15:57.079 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 343], 00:15:57.079 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 363], 00:15:57.079 | 70.00th=[ 371], 80.00th=[ 375], 90.00th=[ 383], 95.00th=[ 392], 00:15:57.079 | 99.00th=[ 416], 99.50th=[ 441], 99.90th=[ 594], 99.95th=[ 3851], 00:15:57.079 | 99.99th=[ 3851] 00:15:57.079 write: IOPS=1534, 
BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:57.079 slat (nsec): min=27778, max=98929, avg=39887.53, stdev=5709.77 00:15:57.079 clat (usec): min=127, max=478, avg=266.45, stdev=28.33 00:15:57.079 lat (usec): min=214, max=577, avg=306.33, stdev=28.29 00:15:57.079 clat percentiles (usec): 00:15:57.079 | 1.00th=[ 194], 5.00th=[ 229], 10.00th=[ 243], 20.00th=[ 249], 00:15:57.079 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:15:57.079 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:15:57.079 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 429], 99.95th=[ 478], 00:15:57.079 | 99.99th=[ 478] 00:15:57.079 bw ( KiB/s): min= 7776, max= 7776, per=25.34%, avg=7776.00, stdev= 0.00, samples=1 00:15:57.079 iops : min= 1944, max= 1944, avg=1944.00, stdev= 0.00, samples=1 00:15:57.079 lat (usec) : 250=11.32%, 500=88.61%, 750=0.03% 00:15:57.079 lat (msec) : 4=0.03% 00:15:57.079 cpu : usr=1.40%, sys=7.40%, ctx=2895, majf=0, minf=5 00:15:57.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.079 issued rwts: total=1344,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.079 job2: (groupid=0, jobs=1): err= 0: pid=75204: Wed Feb 14 19:16:34 2024 00:15:57.079 read: IOPS=1355, BW=5423KiB/s (5553kB/s)(5428KiB/1001msec) 00:15:57.079 slat (nsec): min=11800, max=38382, avg=15453.69, stdev=2782.12 00:15:57.079 clat (usec): min=250, max=579, avg=370.04, stdev=25.48 00:15:57.079 lat (usec): min=268, max=597, avg=385.50, stdev=25.74 00:15:57.079 clat percentiles (usec): 00:15:57.079 | 1.00th=[ 297], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:15:57.079 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 375], 00:15:57.079 | 70.00th=[ 379], 80.00th=[ 383], 90.00th=[ 396], 95.00th=[ 404], 00:15:57.079 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 537], 99.95th=[ 578], 00:15:57.079 | 99.99th=[ 578] 00:15:57.079 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:57.079 slat (usec): min=18, max=109, avg=32.24, stdev= 5.62 00:15:57.079 clat (usec): min=138, max=443, avg=274.35, stdev=30.52 00:15:57.079 lat (usec): min=168, max=478, avg=306.59, stdev=29.45 00:15:57.079 clat percentiles (usec): 00:15:57.079 | 1.00th=[ 190], 5.00th=[ 231], 10.00th=[ 251], 20.00th=[ 260], 00:15:57.079 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:15:57.079 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 322], 00:15:57.079 | 99.00th=[ 379], 99.50th=[ 408], 99.90th=[ 445], 99.95th=[ 445], 00:15:57.079 | 99.99th=[ 445] 00:15:57.079 bw ( KiB/s): min= 7784, max= 7784, per=25.36%, avg=7784.00, stdev= 0.00, samples=1 00:15:57.079 iops : min= 1946, max= 1946, avg=1946.00, stdev= 0.00, samples=1 00:15:57.079 lat (usec) : 250=4.80%, 500=94.95%, 750=0.24% 00:15:57.079 cpu : usr=1.50%, sys=5.40%, ctx=2893, majf=0, minf=3 00:15:57.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.079 issued rwts: total=1357,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.079 job3: (groupid=0, jobs=1): 
err= 0: pid=75209: Wed Feb 14 19:16:34 2024 00:15:57.079 read: IOPS=1361, BW=5447KiB/s (5577kB/s)(5452KiB/1001msec) 00:15:57.079 slat (nsec): min=13356, max=62696, avg=20529.74, stdev=4650.83 00:15:57.079 clat (usec): min=228, max=956, avg=366.35, stdev=33.15 00:15:57.079 lat (usec): min=276, max=1013, avg=386.88, stdev=33.38 00:15:57.079 clat percentiles (usec): 00:15:57.079 | 1.00th=[ 302], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:15:57.079 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 371], 00:15:57.079 | 70.00th=[ 375], 80.00th=[ 379], 90.00th=[ 392], 95.00th=[ 400], 00:15:57.079 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 676], 99.95th=[ 955], 00:15:57.079 | 99.99th=[ 955] 00:15:57.079 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:57.079 slat (nsec): min=17386, max=61375, avg=24670.75, stdev=5266.35 00:15:57.079 clat (usec): min=175, max=3440, avg=279.60, stdev=121.33 00:15:57.079 lat (usec): min=205, max=3470, avg=304.27, stdev=121.18 00:15:57.079 clat percentiles (usec): 00:15:57.079 | 1.00th=[ 192], 5.00th=[ 208], 10.00th=[ 223], 20.00th=[ 262], 00:15:57.079 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:15:57.079 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 318], 00:15:57.079 | 99.00th=[ 343], 99.50th=[ 367], 99.90th=[ 2737], 99.95th=[ 3425], 00:15:57.079 | 99.99th=[ 3425] 00:15:57.079 bw ( KiB/s): min= 4408, max= 7895, per=20.04%, avg=6151.50, stdev=2465.68, samples=2 00:15:57.079 iops : min= 1102, max= 1973, avg=1537.50, stdev=615.89, samples=2 00:15:57.079 lat (usec) : 250=9.07%, 500=90.20%, 750=0.55%, 1000=0.03% 00:15:57.079 lat (msec) : 2=0.03%, 4=0.10% 00:15:57.079 cpu : usr=0.90%, sys=5.80%, ctx=2903, majf=0, minf=12 00:15:57.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.079 issued rwts: total=1363,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.079 00:15:57.079 Run status group 0 (all jobs): 00:15:57.079 READ: bw=26.9MiB/s (28.2MB/s), 5371KiB/s-11.0MiB/s (5500kB/s-11.6MB/s), io=26.9MiB (28.2MB), run=1001-1001msec 00:15:57.079 WRITE: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:15:57.079 00:15:57.079 Disk stats (read/write): 00:15:57.079 nvme0n1: ios=2542/2560, merge=0/0, ticks=480/373, in_queue=853, util=88.77% 00:15:57.079 nvme0n2: ios=1071/1505, merge=0/0, ticks=421/423, in_queue=844, util=89.35% 00:15:57.079 nvme0n3: ios=1024/1505, merge=0/0, ticks=355/440, in_queue=795, util=89.23% 00:15:57.079 nvme0n4: ios=1024/1514, merge=0/0, ticks=377/384, in_queue=761, util=88.85% 00:15:57.079 19:16:34 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:57.079 [global] 00:15:57.079 thread=1 00:15:57.079 invalidate=1 00:15:57.079 rw=randwrite 00:15:57.079 time_based=1 00:15:57.079 runtime=1 00:15:57.079 ioengine=libaio 00:15:57.079 direct=1 00:15:57.079 bs=4096 00:15:57.079 iodepth=1 00:15:57.079 norandommap=0 00:15:57.079 numjobs=1 00:15:57.079 00:15:57.079 verify_dump=1 00:15:57.079 verify_backlog=512 00:15:57.079 verify_state_save=0 00:15:57.079 do_verify=1 00:15:57.079 verify=crc32c-intel 00:15:57.079 [job0] 00:15:57.079 filename=/dev/nvme0n1 00:15:57.079 [job1] 00:15:57.079 
filename=/dev/nvme0n2 00:15:57.079 [job2] 00:15:57.079 filename=/dev/nvme0n3 00:15:57.079 [job3] 00:15:57.079 filename=/dev/nvme0n4 00:15:57.079 Could not set queue depth (nvme0n1) 00:15:57.079 Could not set queue depth (nvme0n2) 00:15:57.079 Could not set queue depth (nvme0n3) 00:15:57.079 Could not set queue depth (nvme0n4) 00:15:57.079 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.079 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.079 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.079 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.079 fio-3.35 00:15:57.079 Starting 4 threads 00:15:58.456 00:15:58.456 job0: (groupid=0, jobs=1): err= 0: pid=75267: Wed Feb 14 19:16:35 2024 00:15:58.456 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:58.456 slat (nsec): min=14000, max=72743, avg=17250.47, stdev=3324.94 00:15:58.456 clat (usec): min=135, max=325, avg=229.94, stdev=29.52 00:15:58.456 lat (usec): min=151, max=341, avg=247.19, stdev=29.57 00:15:58.456 clat percentiles (usec): 00:15:58.456 | 1.00th=[ 163], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 206], 00:15:58.456 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:15:58.456 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 281], 00:15:58.456 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 322], 99.95th=[ 322], 00:15:58.456 | 99.99th=[ 326] 00:15:58.456 write: IOPS=2208, BW=8835KiB/s (9047kB/s)(8844KiB/1001msec); 0 zone resets 00:15:58.456 slat (usec): min=20, max=138, avg=25.21, stdev= 5.00 00:15:58.456 clat (usec): min=105, max=1853, avg=194.57, stdev=43.78 00:15:58.456 lat (usec): min=134, max=1875, avg=219.79, stdev=44.03 00:15:58.456 clat percentiles (usec): 00:15:58.456 | 1.00th=[ 127], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 174], 00:15:58.456 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:15:58.456 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 239], 00:15:58.456 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 326], 00:15:58.456 | 99.99th=[ 1860] 00:15:58.456 bw ( KiB/s): min= 8848, max= 8848, per=25.06%, avg=8848.00, stdev= 0.00, samples=1 00:15:58.456 iops : min= 2212, max= 2212, avg=2212.00, stdev= 0.00, samples=1 00:15:58.456 lat (usec) : 250=87.32%, 500=12.66% 00:15:58.456 lat (msec) : 2=0.02% 00:15:58.456 cpu : usr=2.00%, sys=6.20%, ctx=4261, majf=0, minf=10 00:15:58.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.456 issued rwts: total=2048,2211,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.456 job1: (groupid=0, jobs=1): err= 0: pid=75268: Wed Feb 14 19:16:35 2024 00:15:58.456 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:58.456 slat (nsec): min=14760, max=37254, avg=18063.32, stdev=2424.42 00:15:58.456 clat (usec): min=142, max=464, avg=230.10, stdev=29.62 00:15:58.456 lat (usec): min=161, max=485, avg=248.16, stdev=29.68 00:15:58.456 clat percentiles (usec): 00:15:58.456 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 194], 20.00th=[ 208], 00:15:58.456 | 30.00th=[ 217], 40.00th=[ 
223], 50.00th=[ 231], 60.00th=[ 237], 00:15:58.456 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 281], 00:15:58.456 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 363], 99.95th=[ 363], 00:15:58.456 | 99.99th=[ 465] 00:15:58.456 write: IOPS=2197, BW=8791KiB/s (9002kB/s)(8800KiB/1001msec); 0 zone resets 00:15:58.456 slat (usec): min=21, max=139, avg=26.72, stdev= 5.00 00:15:58.456 clat (usec): min=109, max=486, avg=193.13, stdev=26.39 00:15:58.456 lat (usec): min=137, max=512, avg=219.84, stdev=26.81 00:15:58.456 clat percentiles (usec): 00:15:58.456 | 1.00th=[ 137], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 174], 00:15:58.456 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:15:58.456 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 239], 00:15:58.456 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 326], 99.95th=[ 437], 00:15:58.456 | 99.99th=[ 486] 00:15:58.456 bw ( KiB/s): min= 8760, max= 8760, per=24.82%, avg=8760.00, stdev= 0.00, samples=1 00:15:58.456 iops : min= 2190, max= 2190, avg=2190.00, stdev= 0.00, samples=1 00:15:58.456 lat (usec) : 250=87.64%, 500=12.36% 00:15:58.456 cpu : usr=1.80%, sys=6.90%, ctx=4248, majf=0, minf=11 00:15:58.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.456 issued rwts: total=2048,2200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.456 job2: (groupid=0, jobs=1): err= 0: pid=75269: Wed Feb 14 19:16:35 2024 00:15:58.456 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:58.456 slat (nsec): min=13437, max=47588, avg=16404.19, stdev=3060.67 00:15:58.456 clat (usec): min=160, max=561, avg=226.18, stdev=25.14 00:15:58.456 lat (usec): min=177, max=576, avg=242.59, stdev=25.21 00:15:58.456 clat percentiles (usec): 00:15:58.456 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:15:58.456 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 229], 00:15:58.456 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 269], 00:15:58.456 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 396], 99.95th=[ 396], 00:15:58.456 | 99.99th=[ 562] 00:15:58.456 write: IOPS=2238, BW=8955KiB/s (9170kB/s)(8964KiB/1001msec); 0 zone resets 00:15:58.456 slat (usec): min=19, max=139, avg=24.31, stdev= 6.05 00:15:58.456 clat (usec): min=142, max=1318, avg=196.66, stdev=35.48 00:15:58.456 lat (usec): min=163, max=1338, avg=220.97, stdev=36.48 00:15:58.456 clat percentiles (usec): 00:15:58.457 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:15:58.457 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:15:58.457 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 239], 00:15:58.457 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 529], 99.95th=[ 668], 00:15:58.457 | 99.99th=[ 1319] 00:15:58.457 bw ( KiB/s): min= 8968, max= 8968, per=25.40%, avg=8968.00, stdev= 0.00, samples=1 00:15:58.457 iops : min= 2242, max= 2242, avg=2242.00, stdev= 0.00, samples=1 00:15:58.457 lat (usec) : 250=92.21%, 500=7.69%, 750=0.07% 00:15:58.457 lat (msec) : 2=0.02% 00:15:58.457 cpu : usr=1.50%, sys=6.70%, ctx=4289, majf=0, minf=13 00:15:58.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.457 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.457 issued rwts: total=2048,2241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.457 job3: (groupid=0, jobs=1): err= 0: pid=75270: Wed Feb 14 19:16:35 2024 00:15:58.457 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:58.457 slat (usec): min=13, max=278, avg=15.99, stdev= 6.54 00:15:58.457 clat (usec): min=3, max=1341, avg=232.07, stdev=38.77 00:15:58.457 lat (usec): min=164, max=1354, avg=248.05, stdev=38.56 00:15:58.457 clat percentiles (usec): 00:15:58.457 | 1.00th=[ 174], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 208], 00:15:58.457 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:15:58.457 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 281], 00:15:58.457 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 449], 99.95th=[ 627], 00:15:58.457 | 99.99th=[ 1336] 00:15:58.457 write: IOPS=2179, BW=8719KiB/s (8929kB/s)(8728KiB/1001msec); 0 zone resets 00:15:58.457 slat (usec): min=19, max=199, avg=23.70, stdev= 5.74 00:15:58.457 clat (usec): min=124, max=3395, avg=198.20, stdev=88.83 00:15:58.457 lat (usec): min=145, max=3443, avg=221.89, stdev=89.37 00:15:58.457 clat percentiles (usec): 00:15:58.457 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 178], 00:15:58.457 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 200], 00:15:58.457 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 237], 00:15:58.457 | 99.00th=[ 260], 99.50th=[ 277], 99.90th=[ 693], 99.95th=[ 2507], 00:15:58.457 | 99.99th=[ 3392] 00:15:58.457 bw ( KiB/s): min= 8824, max= 8824, per=25.00%, avg=8824.00, stdev= 0.00, samples=1 00:15:58.457 iops : min= 2206, max= 2206, avg=2206.00, stdev= 0.00, samples=1 00:15:58.457 lat (usec) : 4=0.02%, 250=87.47%, 500=12.36%, 750=0.07% 00:15:58.457 lat (msec) : 2=0.02%, 4=0.05% 00:15:58.457 cpu : usr=1.10%, sys=6.70%, ctx=4230, majf=0, minf=11 00:15:58.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.457 issued rwts: total=2048,2182,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.457 00:15:58.457 Run status group 0 (all jobs): 00:15:58.457 READ: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:15:58.457 WRITE: bw=34.5MiB/s (36.1MB/s), 8719KiB/s-8955KiB/s (8929kB/s-9170kB/s), io=34.5MiB (36.2MB), run=1001-1001msec 00:15:58.457 00:15:58.457 Disk stats (read/write): 00:15:58.457 nvme0n1: ios=1689/2048, merge=0/0, ticks=420/431, in_queue=851, util=88.87% 00:15:58.457 nvme0n2: ios=1681/2048, merge=0/0, ticks=428/426, in_queue=854, util=90.38% 00:15:58.457 nvme0n3: ios=1674/2048, merge=0/0, ticks=405/417, in_queue=822, util=89.38% 00:15:58.457 nvme0n4: ios=1635/2048, merge=0/0, ticks=382/425, in_queue=807, util=89.52% 00:15:58.457 19:16:35 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:58.457 [global] 00:15:58.457 thread=1 00:15:58.457 invalidate=1 00:15:58.457 rw=write 00:15:58.457 time_based=1 00:15:58.457 runtime=1 00:15:58.457 ioengine=libaio 00:15:58.457 direct=1 00:15:58.457 bs=4096 00:15:58.457 iodepth=128 00:15:58.457 norandommap=0 00:15:58.457 numjobs=1 00:15:58.457 00:15:58.457 verify_dump=1 
00:15:58.457 verify_backlog=512 00:15:58.457 verify_state_save=0 00:15:58.457 do_verify=1 00:15:58.457 verify=crc32c-intel 00:15:58.457 [job0] 00:15:58.457 filename=/dev/nvme0n1 00:15:58.457 [job1] 00:15:58.457 filename=/dev/nvme0n2 00:15:58.457 [job2] 00:15:58.457 filename=/dev/nvme0n3 00:15:58.457 [job3] 00:15:58.457 filename=/dev/nvme0n4 00:15:58.457 Could not set queue depth (nvme0n1) 00:15:58.457 Could not set queue depth (nvme0n2) 00:15:58.457 Could not set queue depth (nvme0n3) 00:15:58.457 Could not set queue depth (nvme0n4) 00:15:58.457 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.457 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.457 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.457 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.457 fio-3.35 00:15:58.457 Starting 4 threads 00:15:59.833 00:15:59.833 job0: (groupid=0, jobs=1): err= 0: pid=75325: Wed Feb 14 19:16:36 2024 00:15:59.833 read: IOPS=4590, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:15:59.833 slat (usec): min=6, max=3443, avg=101.63, stdev=453.77 00:15:59.833 clat (usec): min=428, max=16009, avg=13299.91, stdev=1374.56 00:15:59.833 lat (usec): min=3273, max=16036, avg=13401.54, stdev=1309.62 00:15:59.833 clat percentiles (usec): 00:15:59.833 | 1.00th=[ 6915], 5.00th=[10945], 10.00th=[11994], 20.00th=[12911], 00:15:59.833 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:15:59.833 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14353], 95.00th=[15008], 00:15:59.833 | 99.00th=[15795], 99.50th=[15926], 99.90th=[15926], 99.95th=[16057], 00:15:59.833 | 99.99th=[16057] 00:15:59.833 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:15:59.833 slat (usec): min=12, max=3789, avg=106.63, stdev=427.45 00:15:59.833 clat (usec): min=10565, max=17034, avg=14148.91, stdev=1349.90 00:15:59.833 lat (usec): min=10604, max=17056, avg=14255.54, stdev=1323.37 00:15:59.833 clat percentiles (usec): 00:15:59.833 | 1.00th=[11207], 5.00th=[11600], 10.00th=[11994], 20.00th=[12780], 00:15:59.833 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14484], 60.00th=[14615], 00:15:59.833 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15795], 95.00th=[16057], 00:15:59.833 | 99.00th=[16712], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:15:59.833 | 99.99th=[16909] 00:15:59.833 bw ( KiB/s): min=17432, max=19432, per=36.54%, avg=18432.00, stdev=1414.21, samples=2 00:15:59.833 iops : min= 4358, max= 4858, avg=4608.00, stdev=353.55, samples=2 00:15:59.833 lat (usec) : 500=0.01% 00:15:59.833 lat (msec) : 4=0.30%, 10=0.39%, 20=99.29% 00:15:59.833 cpu : usr=4.40%, sys=14.39%, ctx=682, majf=0, minf=1 00:15:59.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:59.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.833 issued rwts: total=4600,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.833 job1: (groupid=0, jobs=1): err= 0: pid=75326: Wed Feb 14 19:16:36 2024 00:15:59.833 read: IOPS=1658, BW=6633KiB/s (6793kB/s)(6660KiB/1004msec) 00:15:59.833 slat (usec): min=6, max=14887, avg=288.45, stdev=1389.38 00:15:59.833 clat (usec): 
min=674, max=56380, avg=33186.90, stdev=7354.37 00:15:59.833 lat (usec): min=6866, max=56420, avg=33475.35, stdev=7428.14 00:15:59.833 clat percentiles (usec): 00:15:59.833 | 1.00th=[ 7177], 5.00th=[23200], 10.00th=[26346], 20.00th=[28705], 00:15:59.833 | 30.00th=[29754], 40.00th=[30278], 50.00th=[32113], 60.00th=[34866], 00:15:59.833 | 70.00th=[35914], 80.00th=[40633], 90.00th=[41681], 95.00th=[44827], 00:15:59.833 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51119], 99.95th=[56361], 00:15:59.833 | 99.99th=[56361] 00:15:59.833 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:15:59.833 slat (usec): min=13, max=10161, avg=246.16, stdev=1081.06 00:15:59.833 clat (usec): min=20944, max=60237, avg=34489.93, stdev=7792.22 00:15:59.833 lat (usec): min=20980, max=60272, avg=34736.08, stdev=7851.03 00:15:59.833 clat percentiles (usec): 00:15:59.833 | 1.00th=[23462], 5.00th=[23725], 10.00th=[24249], 20.00th=[25560], 00:15:59.833 | 30.00th=[29754], 40.00th=[31851], 50.00th=[35390], 60.00th=[35914], 00:15:59.833 | 70.00th=[38011], 80.00th=[40633], 90.00th=[44303], 95.00th=[47973], 00:15:59.833 | 99.00th=[58459], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:15:59.833 | 99.99th=[60031] 00:15:59.833 bw ( KiB/s): min= 8192, max= 8192, per=16.24%, avg=8192.00, stdev= 0.00, samples=2 00:15:59.833 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:15:59.833 lat (usec) : 750=0.03% 00:15:59.833 lat (msec) : 10=0.83%, 20=1.08%, 50=95.80%, 100=2.26% 00:15:59.833 cpu : usr=2.29%, sys=6.38%, ctx=268, majf=0, minf=3 00:15:59.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:15:59.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.833 issued rwts: total=1665,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.833 job2: (groupid=0, jobs=1): err= 0: pid=75327: Wed Feb 14 19:16:36 2024 00:15:59.833 read: IOPS=3954, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1003msec) 00:15:59.833 slat (usec): min=7, max=3912, avg=118.43, stdev=518.95 00:15:59.833 clat (usec): min=1834, max=18836, avg=15422.56, stdev=1833.37 00:15:59.833 lat (usec): min=1851, max=19042, avg=15540.99, stdev=1780.58 00:15:59.833 clat percentiles (usec): 00:15:59.833 | 1.00th=[ 5932], 5.00th=[12649], 10.00th=[13435], 20.00th=[14484], 00:15:59.833 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:15:59.833 | 70.00th=[16319], 80.00th=[16581], 90.00th=[16909], 95.00th=[17171], 00:15:59.833 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18220], 99.95th=[18744], 00:15:59.833 | 99.99th=[18744] 00:15:59.833 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:15:59.833 slat (usec): min=12, max=4514, avg=120.37, stdev=485.31 00:15:59.833 clat (usec): min=11696, max=19163, avg=15915.24, stdev=1500.51 00:15:59.833 lat (usec): min=12049, max=19199, avg=16035.61, stdev=1474.64 00:15:59.833 clat percentiles (usec): 00:15:59.833 | 1.00th=[12649], 5.00th=[13304], 10.00th=[13829], 20.00th=[14484], 00:15:59.833 | 30.00th=[15270], 40.00th=[15664], 50.00th=[16057], 60.00th=[16450], 00:15:59.833 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:15:59.833 | 99.00th=[19006], 99.50th=[19006], 99.90th=[19006], 99.95th=[19268], 00:15:59.833 | 99.99th=[19268] 00:15:59.833 bw ( KiB/s): min=16384, max=16384, per=32.48%, avg=16384.00, stdev= 0.00, samples=2 
00:15:59.833 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:15:59.833 lat (msec) : 2=0.10%, 4=0.09%, 10=0.79%, 20=99.02% 00:15:59.833 cpu : usr=3.99%, sys=13.57%, ctx=662, majf=0, minf=1 00:15:59.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:59.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.833 issued rwts: total=3966,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.833 job3: (groupid=0, jobs=1): err= 0: pid=75328: Wed Feb 14 19:16:36 2024 00:15:59.833 read: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec) 00:15:59.833 slat (usec): min=8, max=15536, avg=307.10, stdev=1469.40 00:15:59.833 clat (usec): min=22490, max=65728, avg=39495.39, stdev=8213.00 00:15:59.833 lat (usec): min=28772, max=65758, avg=39802.49, stdev=8159.18 00:15:59.833 clat percentiles (usec): 00:15:59.833 | 1.00th=[27395], 5.00th=[30802], 10.00th=[31589], 20.00th=[33817], 00:15:59.833 | 30.00th=[35390], 40.00th=[35914], 50.00th=[36439], 60.00th=[39060], 00:15:59.833 | 70.00th=[40109], 80.00th=[42730], 90.00th=[52167], 95.00th=[58459], 00:15:59.833 | 99.00th=[64226], 99.50th=[64226], 99.90th=[65799], 99.95th=[65799], 00:15:59.833 | 99.99th=[65799] 00:15:59.833 write: IOPS=1903, BW=7614KiB/s (7796kB/s)(7644KiB/1004msec); 0 zone resets 00:15:59.833 slat (usec): min=13, max=10133, avg=269.22, stdev=1273.86 00:15:59.833 clat (usec): min=384, max=42680, avg=34201.32, stdev=5961.62 00:15:59.833 lat (usec): min=7396, max=44593, avg=34470.53, stdev=5862.38 00:15:59.833 clat percentiles (usec): 00:15:59.833 | 1.00th=[ 7963], 5.00th=[26608], 10.00th=[28443], 20.00th=[31327], 00:15:59.833 | 30.00th=[32375], 40.00th=[33424], 50.00th=[35390], 60.00th=[35914], 00:15:59.833 | 70.00th=[36963], 80.00th=[39060], 90.00th=[40633], 95.00th=[41157], 00:15:59.833 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:15:59.833 | 99.99th=[42730] 00:15:59.833 bw ( KiB/s): min= 6320, max= 7928, per=14.12%, avg=7124.00, stdev=1137.03, samples=2 00:15:59.833 iops : min= 1580, max= 1982, avg=1781.00, stdev=284.26, samples=2 00:15:59.833 lat (usec) : 500=0.03% 00:15:59.833 lat (msec) : 10=0.93%, 20=1.13%, 50=92.69%, 100=5.22% 00:15:59.833 cpu : usr=1.99%, sys=6.08%, ctx=199, majf=0, minf=10 00:15:59.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:15:59.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.833 issued rwts: total=1536,1911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.833 00:15:59.833 Run status group 0 (all jobs): 00:15:59.833 READ: bw=45.8MiB/s (48.0MB/s), 6120KiB/s-17.9MiB/s (6266kB/s-18.8MB/s), io=46.0MiB (48.2MB), run=1002-1004msec 00:15:59.833 WRITE: bw=49.3MiB/s (51.7MB/s), 7614KiB/s-18.0MiB/s (7796kB/s-18.8MB/s), io=49.5MiB (51.9MB), run=1002-1004msec 00:15:59.833 00:15:59.833 Disk stats (read/write): 00:15:59.833 nvme0n1: ios=3829/4096, merge=0/0, ticks=11954/12626, in_queue=24580, util=88.34% 00:15:59.833 nvme0n2: ios=1574/1695, merge=0/0, ticks=17593/16623, in_queue=34216, util=88.53% 00:15:59.833 nvme0n3: ios=3298/3584, merge=0/0, ticks=12178/12628, in_queue=24806, util=89.10% 00:15:59.833 nvme0n4: ios=1376/1536, merge=0/0, 
ticks=13479/12237, in_queue=25716, util=89.55% 00:15:59.833 19:16:36 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:59.833 [global] 00:15:59.833 thread=1 00:15:59.833 invalidate=1 00:15:59.833 rw=randwrite 00:15:59.833 time_based=1 00:15:59.833 runtime=1 00:15:59.833 ioengine=libaio 00:15:59.833 direct=1 00:15:59.833 bs=4096 00:15:59.833 iodepth=128 00:15:59.833 norandommap=0 00:15:59.833 numjobs=1 00:15:59.833 00:15:59.833 verify_dump=1 00:15:59.833 verify_backlog=512 00:15:59.833 verify_state_save=0 00:15:59.833 do_verify=1 00:15:59.833 verify=crc32c-intel 00:15:59.833 [job0] 00:15:59.833 filename=/dev/nvme0n1 00:15:59.833 [job1] 00:15:59.833 filename=/dev/nvme0n2 00:15:59.834 [job2] 00:15:59.834 filename=/dev/nvme0n3 00:15:59.834 [job3] 00:15:59.834 filename=/dev/nvme0n4 00:15:59.834 Could not set queue depth (nvme0n1) 00:15:59.834 Could not set queue depth (nvme0n2) 00:15:59.834 Could not set queue depth (nvme0n3) 00:15:59.834 Could not set queue depth (nvme0n4) 00:15:59.834 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.834 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.834 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.834 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.834 fio-3.35 00:15:59.834 Starting 4 threads 00:16:01.219 00:16:01.219 job0: (groupid=0, jobs=1): err= 0: pid=75387: Wed Feb 14 19:16:38 2024 00:16:01.219 read: IOPS=2055, BW=8223KiB/s (8420kB/s)(8256KiB/1004msec) 00:16:01.219 slat (usec): min=9, max=10596, avg=211.11, stdev=1049.55 00:16:01.219 clat (usec): min=2987, max=40313, avg=27220.72, stdev=3911.27 00:16:01.219 lat (usec): min=6721, max=43213, avg=27431.83, stdev=3823.69 00:16:01.219 clat percentiles (usec): 00:16:01.219 | 1.00th=[19530], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:16:01.219 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26608], 00:16:01.219 | 70.00th=[26870], 80.00th=[29492], 90.00th=[32375], 95.00th=[35390], 00:16:01.219 | 99.00th=[38011], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:16:01.219 | 99.99th=[40109] 00:16:01.219 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:16:01.219 slat (usec): min=17, max=6614, avg=210.96, stdev=866.05 00:16:01.219 clat (usec): min=7850, max=42589, avg=27398.56, stdev=4626.38 00:16:01.219 lat (usec): min=8744, max=42620, avg=27609.52, stdev=4569.74 00:16:01.219 clat percentiles (usec): 00:16:01.219 | 1.00th=[14091], 5.00th=[22414], 10.00th=[25035], 20.00th=[25560], 00:16:01.219 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:16:01.219 | 70.00th=[27395], 80.00th=[28181], 90.00th=[30278], 95.00th=[41157], 00:16:01.219 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:01.219 | 99.99th=[42730] 00:16:01.219 bw ( KiB/s): min= 9120, max=10472, per=19.59%, avg=9796.00, stdev=956.01, samples=2 00:16:01.219 iops : min= 2280, max= 2618, avg=2449.00, stdev=239.00, samples=2 00:16:01.219 lat (msec) : 4=0.02%, 10=0.65%, 20=1.02%, 50=98.31% 00:16:01.219 cpu : usr=3.29%, sys=7.88%, ctx=316, majf=0, minf=19 00:16:01.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:16:01.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:01.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.219 issued rwts: total=2064,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.219 job1: (groupid=0, jobs=1): err= 0: pid=75388: Wed Feb 14 19:16:38 2024 00:16:01.219 read: IOPS=2518, BW=9.84MiB/s (10.3MB/s)(9.88MiB/1004msec) 00:16:01.219 slat (usec): min=7, max=9525, avg=199.42, stdev=949.36 00:16:01.219 clat (usec): min=940, max=29625, avg=24986.33, stdev=2927.29 00:16:01.219 lat (usec): min=3489, max=29644, avg=25185.75, stdev=2784.92 00:16:01.219 clat percentiles (usec): 00:16:01.219 | 1.00th=[ 8225], 5.00th=[20579], 10.00th=[22938], 20.00th=[24249], 00:16:01.219 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:16:01.219 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[27657], 00:16:01.219 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29492], 99.95th=[29754], 00:16:01.219 | 99.99th=[29754] 00:16:01.219 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:16:01.219 slat (usec): min=13, max=6823, avg=183.91, stdev=873.89 00:16:01.219 clat (usec): min=14441, max=31908, avg=24560.15, stdev=3269.76 00:16:01.219 lat (usec): min=15020, max=31940, avg=24744.06, stdev=3177.39 00:16:01.219 clat percentiles (usec): 00:16:01.219 | 1.00th=[16712], 5.00th=[19792], 10.00th=[20317], 20.00th=[21365], 00:16:01.219 | 30.00th=[21890], 40.00th=[24249], 50.00th=[25560], 60.00th=[25822], 00:16:01.219 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 95.00th=[29754], 00:16:01.219 | 99.00th=[31851], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:16:01.219 | 99.99th=[31851] 00:16:01.219 bw ( KiB/s): min= 9976, max=10504, per=20.48%, avg=10240.00, stdev=373.35, samples=2 00:16:01.219 iops : min= 2494, max= 2626, avg=2560.00, stdev=93.34, samples=2 00:16:01.219 lat (usec) : 1000=0.02% 00:16:01.219 lat (msec) : 4=0.02%, 10=0.61%, 20=4.78%, 50=94.58% 00:16:01.219 cpu : usr=2.69%, sys=8.87%, ctx=212, majf=0, minf=9 00:16:01.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:01.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.219 issued rwts: total=2529,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.219 job2: (groupid=0, jobs=1): err= 0: pid=75389: Wed Feb 14 19:16:38 2024 00:16:01.219 read: IOPS=3993, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1003msec) 00:16:01.219 slat (usec): min=10, max=4297, avg=116.68, stdev=530.12 00:16:01.219 clat (usec): min=441, max=19852, avg=15353.78, stdev=1724.65 00:16:01.219 lat (usec): min=3393, max=19868, avg=15470.47, stdev=1656.48 00:16:01.219 clat percentiles (usec): 00:16:01.219 | 1.00th=[ 7439], 5.00th=[12649], 10.00th=[13566], 20.00th=[14877], 00:16:01.219 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15664], 60.00th=[15795], 00:16:01.219 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16581], 95.00th=[17171], 00:16:01.219 | 99.00th=[17957], 99.50th=[17957], 99.90th=[19792], 99.95th=[19792], 00:16:01.219 | 99.99th=[19792] 00:16:01.219 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:16:01.219 slat (usec): min=13, max=4176, avg=120.88, stdev=511.77 00:16:01.219 clat (usec): min=11814, max=18842, avg=15829.42, stdev=1504.09 00:16:01.219 lat (usec): min=11843, max=19419, avg=15950.30, 
stdev=1465.20 00:16:01.219 clat percentiles (usec): 00:16:01.219 | 1.00th=[12387], 5.00th=[13042], 10.00th=[13566], 20.00th=[14222], 00:16:01.219 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16057], 60.00th=[16581], 00:16:01.219 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[17695], 00:16:01.219 | 99.00th=[18482], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:16:01.219 | 99.99th=[18744] 00:16:01.219 bw ( KiB/s): min=16384, max=16416, per=32.80%, avg=16400.00, stdev=22.63, samples=2 00:16:01.219 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:16:01.219 lat (usec) : 500=0.01% 00:16:01.220 lat (msec) : 4=0.31%, 10=0.48%, 20=99.20% 00:16:01.220 cpu : usr=3.89%, sys=13.47%, ctx=555, majf=0, minf=9 00:16:01.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:01.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.220 issued rwts: total=4005,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.220 job3: (groupid=0, jobs=1): err= 0: pid=75390: Wed Feb 14 19:16:38 2024 00:16:01.220 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:16:01.220 slat (usec): min=12, max=4762, avg=147.87, stdev=695.06 00:16:01.220 clat (usec): min=14370, max=25053, avg=19533.55, stdev=1457.66 00:16:01.220 lat (usec): min=14948, max=25070, avg=19681.41, stdev=1313.99 00:16:01.220 clat percentiles (usec): 00:16:01.220 | 1.00th=[15401], 5.00th=[16188], 10.00th=[17433], 20.00th=[19006], 00:16:01.220 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:16:01.220 | 70.00th=[20055], 80.00th=[20317], 90.00th=[20579], 95.00th=[21627], 00:16:01.220 | 99.00th=[22938], 99.50th=[24511], 99.90th=[25035], 99.95th=[25035], 00:16:01.220 | 99.99th=[25035] 00:16:01.220 write: IOPS=3326, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1002msec); 0 zone resets 00:16:01.220 slat (usec): min=17, max=5362, avg=154.09, stdev=643.49 00:16:01.220 clat (usec): min=425, max=24201, avg=19857.88, stdev=2694.05 00:16:01.220 lat (usec): min=4874, max=24690, avg=20011.98, stdev=2679.68 00:16:01.220 clat percentiles (usec): 00:16:01.220 | 1.00th=[ 5932], 5.00th=[16581], 10.00th=[17433], 20.00th=[18220], 00:16:01.220 | 30.00th=[18744], 40.00th=[19268], 50.00th=[19792], 60.00th=[20317], 00:16:01.220 | 70.00th=[21103], 80.00th=[22152], 90.00th=[23200], 95.00th=[23462], 00:16:01.220 | 99.00th=[23987], 99.50th=[24249], 99.90th=[24249], 99.95th=[24249], 00:16:01.220 | 99.99th=[24249] 00:16:01.220 bw ( KiB/s): min=12744, max=12896, per=25.64%, avg=12820.00, stdev=107.48, samples=2 00:16:01.220 iops : min= 3186, max= 3224, avg=3205.00, stdev=26.87, samples=2 00:16:01.220 lat (usec) : 500=0.02% 00:16:01.220 lat (msec) : 10=0.56%, 20=57.17%, 50=42.25% 00:16:01.220 cpu : usr=4.00%, sys=10.99%, ctx=483, majf=0, minf=13 00:16:01.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:01.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.220 issued rwts: total=3072,3333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.220 00:16:01.220 Run status group 0 (all jobs): 00:16:01.220 READ: bw=45.4MiB/s (47.6MB/s), 8223KiB/s-15.6MiB/s (8420kB/s-16.4MB/s), io=45.6MiB (47.8MB), run=1002-1004msec 
00:16:01.220 WRITE: bw=48.8MiB/s (51.2MB/s), 9.96MiB/s-16.0MiB/s (10.4MB/s-16.7MB/s), io=49.0MiB (51.4MB), run=1002-1004msec 00:16:01.220 00:16:01.220 Disk stats (read/write): 00:16:01.220 nvme0n1: ios=2001/2048, merge=0/0, ticks=12553/13279, in_queue=25832, util=88.97% 00:16:01.220 nvme0n2: ios=2096/2297, merge=0/0, ticks=12699/12572, in_queue=25271, util=89.27% 00:16:01.220 nvme0n3: ios=3366/3584, merge=0/0, ticks=12269/12162, in_queue=24431, util=88.67% 00:16:01.220 nvme0n4: ios=2560/2927, merge=0/0, ticks=12017/13449, in_queue=25466, util=89.59% 00:16:01.220 19:16:38 -- target/fio.sh@55 -- # sync 00:16:01.220 19:16:38 -- target/fio.sh@59 -- # fio_pid=75403 00:16:01.220 19:16:38 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:01.220 19:16:38 -- target/fio.sh@61 -- # sleep 3 00:16:01.220 [global] 00:16:01.220 thread=1 00:16:01.220 invalidate=1 00:16:01.220 rw=read 00:16:01.220 time_based=1 00:16:01.220 runtime=10 00:16:01.220 ioengine=libaio 00:16:01.220 direct=1 00:16:01.220 bs=4096 00:16:01.220 iodepth=1 00:16:01.220 norandommap=1 00:16:01.220 numjobs=1 00:16:01.220 00:16:01.220 [job0] 00:16:01.220 filename=/dev/nvme0n1 00:16:01.220 [job1] 00:16:01.220 filename=/dev/nvme0n2 00:16:01.220 [job2] 00:16:01.220 filename=/dev/nvme0n3 00:16:01.220 [job3] 00:16:01.220 filename=/dev/nvme0n4 00:16:01.220 Could not set queue depth (nvme0n1) 00:16:01.220 Could not set queue depth (nvme0n2) 00:16:01.220 Could not set queue depth (nvme0n3) 00:16:01.220 Could not set queue depth (nvme0n4) 00:16:01.220 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.220 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.220 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.220 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.220 fio-3.35 00:16:01.220 Starting 4 threads 00:16:04.507 19:16:41 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:04.507 fio: pid=75446, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:04.507 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=42491904, buflen=4096 00:16:04.507 19:16:41 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:04.766 fio: pid=75445, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:04.766 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=48054272, buflen=4096 00:16:04.766 19:16:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:04.766 19:16:41 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:05.025 fio: pid=75443, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:05.025 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=9818112, buflen=4096 00:16:05.025 19:16:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.025 19:16:42 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:05.284 fio: pid=75444, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:05.284 fio: io_u error on file /dev/nvme0n2: Remote I/O error: 
read offset=13344768, buflen=4096 00:16:05.284 00:16:05.284 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75443: Wed Feb 14 19:16:42 2024 00:16:05.284 read: IOPS=5417, BW=21.2MiB/s (22.2MB/s)(73.4MiB/3467msec) 00:16:05.284 slat (usec): min=11, max=12556, avg=18.67, stdev=157.27 00:16:05.284 clat (usec): min=127, max=2214, avg=164.59, stdev=44.05 00:16:05.284 lat (usec): min=145, max=12755, avg=183.26, stdev=163.98 00:16:05.284 clat percentiles (usec): 00:16:05.284 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 147], 00:16:05.284 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:16:05.284 | 70.00th=[ 161], 80.00th=[ 172], 90.00th=[ 206], 95.00th=[ 223], 00:16:05.284 | 99.00th=[ 255], 99.50th=[ 318], 99.90th=[ 586], 99.95th=[ 873], 00:16:05.284 | 99.99th=[ 1991] 00:16:05.284 bw ( KiB/s): min=17628, max=23410, per=34.20%, avg=22137.00, stdev=2286.07, samples=6 00:16:05.284 iops : min= 4407, max= 5852, avg=5534.17, stdev=571.46, samples=6 00:16:05.284 lat (usec) : 250=98.92%, 500=0.94%, 750=0.05%, 1000=0.03% 00:16:05.284 lat (msec) : 2=0.04%, 4=0.01% 00:16:05.284 cpu : usr=1.47%, sys=7.10%, ctx=18792, majf=0, minf=1 00:16:05.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.284 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.284 issued rwts: total=18782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.284 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75444: Wed Feb 14 19:16:42 2024 00:16:05.284 read: IOPS=5250, BW=20.5MiB/s (21.5MB/s)(76.7MiB/3741msec) 00:16:05.284 slat (usec): min=10, max=11817, avg=17.63, stdev=157.43 00:16:05.284 clat (usec): min=98, max=163548, avg=171.43, stdev=1166.95 00:16:05.284 lat (usec): min=141, max=163562, avg=189.06, stdev=1177.66 00:16:05.284 clat percentiles (usec): 00:16:05.284 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:16:05.284 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:16:05.284 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 198], 95.00th=[ 223], 00:16:05.284 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 652], 99.95th=[ 963], 00:16:05.284 | 99.99th=[ 3785] 00:16:05.284 bw ( KiB/s): min=12894, max=23512, per=32.67%, avg=21146.29, stdev=4060.86, samples=7 00:16:05.284 iops : min= 3223, max= 5878, avg=5286.43, stdev=1015.34, samples=7 00:16:05.284 lat (usec) : 100=0.01%, 250=99.29%, 500=0.56%, 750=0.06%, 1000=0.04% 00:16:05.284 lat (msec) : 2=0.02%, 4=0.02%, 250=0.01% 00:16:05.284 cpu : usr=1.23%, sys=6.58%, ctx=19658, majf=0, minf=1 00:16:05.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.284 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.284 issued rwts: total=19643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.284 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75445: Wed Feb 14 19:16:42 2024 00:16:05.284 read: IOPS=3626, BW=14.2MiB/s (14.9MB/s)(45.8MiB/3235msec) 00:16:05.284 slat (usec): min=11, max=11844, avg=16.46, stdev=134.31 00:16:05.284 clat (usec): min=36, max=2691, 
avg=257.85, stdev=60.24 00:16:05.284 lat (usec): min=153, max=12028, avg=274.31, stdev=145.97 00:16:05.284 clat percentiles (usec): 00:16:05.285 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 206], 00:16:05.285 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:16:05.285 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 302], 00:16:05.285 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 478], 99.95th=[ 824], 00:16:05.285 | 99.99th=[ 2343] 00:16:05.285 bw ( KiB/s): min=13362, max=17261, per=21.83%, avg=14126.50, stdev=1543.64, samples=6 00:16:05.285 iops : min= 3340, max= 4315, avg=3531.50, stdev=385.86, samples=6 00:16:05.285 lat (usec) : 50=0.01%, 250=26.28%, 500=73.60%, 750=0.04%, 1000=0.02% 00:16:05.285 lat (msec) : 2=0.02%, 4=0.02% 00:16:05.285 cpu : usr=0.90%, sys=4.76%, ctx=11760, majf=0, minf=1 00:16:05.285 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.285 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.285 issued rwts: total=11733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.285 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.285 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75446: Wed Feb 14 19:16:42 2024 00:16:05.285 read: IOPS=3478, BW=13.6MiB/s (14.2MB/s)(40.5MiB/2983msec) 00:16:05.285 slat (usec): min=11, max=139, avg=15.99, stdev= 2.88 00:16:05.285 clat (usec): min=144, max=7779, avg=270.02, stdev=95.36 00:16:05.285 lat (usec): min=161, max=7794, avg=286.01, stdev=95.05 00:16:05.285 clat percentiles (usec): 00:16:05.285 | 1.00th=[ 172], 5.00th=[ 188], 10.00th=[ 206], 20.00th=[ 262], 00:16:05.285 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:16:05.285 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 302], 00:16:05.285 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 537], 99.95th=[ 1860], 00:16:05.285 | 99.99th=[ 3458] 00:16:05.285 bw ( KiB/s): min=13362, max=16616, per=21.73%, avg=14061.20, stdev=1428.63, samples=5 00:16:05.285 iops : min= 3340, max= 4154, avg=3515.20, stdev=357.22, samples=5 00:16:05.285 lat (usec) : 250=17.00%, 500=82.88%, 750=0.03%, 1000=0.01% 00:16:05.285 lat (msec) : 2=0.03%, 4=0.03%, 10=0.01% 00:16:05.285 cpu : usr=1.17%, sys=4.26%, ctx=10410, majf=0, minf=1 00:16:05.285 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.285 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.285 issued rwts: total=10375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.285 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.285 00:16:05.285 Run status group 0 (all jobs): 00:16:05.285 READ: bw=63.2MiB/s (66.3MB/s), 13.6MiB/s-21.2MiB/s (14.2MB/s-22.2MB/s), io=236MiB (248MB), run=2983-3741msec 00:16:05.285 00:16:05.285 Disk stats (read/write): 00:16:05.285 nvme0n1: ios=18376/0, merge=0/0, ticks=3091/0, in_queue=3091, util=95.30% 00:16:05.285 nvme0n2: ios=18863/0, merge=0/0, ticks=3338/0, in_queue=3338, util=95.45% 00:16:05.285 nvme0n3: ios=11080/0, merge=0/0, ticks=2887/0, in_queue=2887, util=96.21% 00:16:05.285 nvme0n4: ios=9996/0, merge=0/0, ticks=2724/0, in_queue=2724, util=96.59% 00:16:05.285 19:16:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.285 19:16:42 -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:05.544 19:16:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.544 19:16:42 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:05.803 19:16:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.803 19:16:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:06.061 19:16:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:06.061 19:16:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:06.320 19:16:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:06.320 19:16:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:06.578 19:16:43 -- target/fio.sh@69 -- # fio_status=0 00:16:06.578 19:16:43 -- target/fio.sh@70 -- # wait 75403 00:16:06.578 19:16:43 -- target/fio.sh@70 -- # fio_status=4 00:16:06.578 19:16:43 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:06.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.578 19:16:43 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:06.578 19:16:43 -- common/autotest_common.sh@1196 -- # local i=0 00:16:06.578 19:16:43 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:16:06.578 19:16:43 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.578 19:16:43 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:06.578 19:16:43 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.837 nvmf hotplug test: fio failed as expected 00:16:06.837 19:16:43 -- common/autotest_common.sh@1208 -- # return 0 00:16:06.837 19:16:43 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:06.837 19:16:43 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:06.837 19:16:43 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:06.837 19:16:44 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:06.837 19:16:44 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:06.837 19:16:44 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:06.837 19:16:44 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:06.837 19:16:44 -- target/fio.sh@91 -- # nvmftestfini 00:16:06.837 19:16:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:06.837 19:16:44 -- nvmf/common.sh@116 -- # sync 00:16:07.096 19:16:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:07.096 19:16:44 -- nvmf/common.sh@119 -- # set +e 00:16:07.096 19:16:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:07.096 19:16:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:07.096 rmmod nvme_tcp 00:16:07.096 rmmod nvme_fabrics 00:16:07.096 rmmod nvme_keyring 00:16:07.096 19:16:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:07.096 19:16:44 -- nvmf/common.sh@123 -- # set -e 00:16:07.096 19:16:44 -- nvmf/common.sh@124 -- # return 0 00:16:07.096 19:16:44 -- nvmf/common.sh@477 -- # '[' -n 74913 ']' 00:16:07.096 19:16:44 -- nvmf/common.sh@478 -- # killprocess 74913 00:16:07.096 19:16:44 -- 
common/autotest_common.sh@924 -- # '[' -z 74913 ']' 00:16:07.096 19:16:44 -- common/autotest_common.sh@928 -- # kill -0 74913 00:16:07.096 19:16:44 -- common/autotest_common.sh@929 -- # uname 00:16:07.096 19:16:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:07.096 19:16:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 74913 00:16:07.096 killing process with pid 74913 00:16:07.096 19:16:44 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:07.096 19:16:44 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:07.096 19:16:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 74913' 00:16:07.096 19:16:44 -- common/autotest_common.sh@943 -- # kill 74913 00:16:07.096 19:16:44 -- common/autotest_common.sh@948 -- # wait 74913 00:16:07.355 19:16:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:07.355 19:16:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:07.355 19:16:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:07.355 19:16:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.355 19:16:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:07.355 19:16:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.355 19:16:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.355 19:16:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.355 19:16:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:07.355 ************************************ 00:16:07.355 END TEST nvmf_fio_target 00:16:07.355 ************************************ 00:16:07.355 00:16:07.355 real 0m19.784s 00:16:07.355 user 1m14.751s 00:16:07.355 sys 0m8.940s 00:16:07.355 19:16:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:07.355 19:16:44 -- common/autotest_common.sh@10 -- # set +x 00:16:07.614 19:16:44 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:07.614 19:16:44 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:07.614 19:16:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:07.614 19:16:44 -- common/autotest_common.sh@10 -- # set +x 00:16:07.614 ************************************ 00:16:07.614 START TEST nvmf_bdevio 00:16:07.614 ************************************ 00:16:07.614 19:16:44 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:07.614 * Looking for test storage... 
00:16:07.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:07.614 19:16:44 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.614 19:16:44 -- nvmf/common.sh@7 -- # uname -s 00:16:07.614 19:16:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.614 19:16:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.614 19:16:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.614 19:16:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.614 19:16:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.614 19:16:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.614 19:16:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.614 19:16:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.614 19:16:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.614 19:16:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.614 19:16:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:16:07.614 19:16:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:16:07.614 19:16:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.614 19:16:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.614 19:16:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.614 19:16:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.614 19:16:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.614 19:16:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.614 19:16:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.614 19:16:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.614 19:16:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.615 19:16:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.615 19:16:44 -- 
paths/export.sh@5 -- # export PATH 00:16:07.615 19:16:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.615 19:16:44 -- nvmf/common.sh@46 -- # : 0 00:16:07.615 19:16:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:07.615 19:16:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:07.615 19:16:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:07.615 19:16:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.615 19:16:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.615 19:16:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:07.615 19:16:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:07.615 19:16:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:07.615 19:16:44 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.615 19:16:44 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.615 19:16:44 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:07.615 19:16:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:07.615 19:16:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.615 19:16:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:07.615 19:16:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:07.615 19:16:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:07.615 19:16:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.615 19:16:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.615 19:16:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.615 19:16:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:07.615 19:16:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:07.615 19:16:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:07.615 19:16:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:07.615 19:16:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:07.615 19:16:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:07.615 19:16:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.615 19:16:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.615 19:16:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:07.615 19:16:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:07.615 19:16:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:07.615 19:16:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:07.615 19:16:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:07.615 19:16:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.615 19:16:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:07.615 19:16:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:07.615 19:16:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:07.615 19:16:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:07.615 19:16:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:07.615 
19:16:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:07.615 Cannot find device "nvmf_tgt_br" 00:16:07.615 19:16:44 -- nvmf/common.sh@154 -- # true 00:16:07.615 19:16:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.615 Cannot find device "nvmf_tgt_br2" 00:16:07.615 19:16:44 -- nvmf/common.sh@155 -- # true 00:16:07.615 19:16:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:07.615 19:16:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:07.615 Cannot find device "nvmf_tgt_br" 00:16:07.615 19:16:44 -- nvmf/common.sh@157 -- # true 00:16:07.615 19:16:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:07.615 Cannot find device "nvmf_tgt_br2" 00:16:07.615 19:16:44 -- nvmf/common.sh@158 -- # true 00:16:07.615 19:16:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:07.873 19:16:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:07.873 19:16:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.873 19:16:45 -- nvmf/common.sh@161 -- # true 00:16:07.873 19:16:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.873 19:16:45 -- nvmf/common.sh@162 -- # true 00:16:07.873 19:16:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:07.873 19:16:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:07.873 19:16:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:07.873 19:16:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:07.873 19:16:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:07.873 19:16:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:07.873 19:16:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:07.873 19:16:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:07.873 19:16:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:07.874 19:16:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:07.874 19:16:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:07.874 19:16:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:07.874 19:16:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:07.874 19:16:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:07.874 19:16:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:07.874 19:16:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:07.874 19:16:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:07.874 19:16:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:07.874 19:16:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:07.874 19:16:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:07.874 19:16:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:07.874 19:16:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:07.874 19:16:45 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:07.874 19:16:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:07.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:16:07.874 00:16:07.874 --- 10.0.0.2 ping statistics --- 00:16:07.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.874 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:16:07.874 19:16:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:07.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:07.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:16:07.874 00:16:07.874 --- 10.0.0.3 ping statistics --- 00:16:07.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.874 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:07.874 19:16:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:07.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:16:07.874 00:16:07.874 --- 10.0.0.1 ping statistics --- 00:16:07.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.874 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:07.874 19:16:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.874 19:16:45 -- nvmf/common.sh@421 -- # return 0 00:16:07.874 19:16:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:07.874 19:16:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.874 19:16:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:07.874 19:16:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:07.874 19:16:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.874 19:16:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:07.874 19:16:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:08.133 19:16:45 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:08.133 19:16:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:08.133 19:16:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:08.133 19:16:45 -- common/autotest_common.sh@10 -- # set +x 00:16:08.133 19:16:45 -- nvmf/common.sh@469 -- # nvmfpid=75774 00:16:08.133 19:16:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:08.133 19:16:45 -- nvmf/common.sh@470 -- # waitforlisten 75774 00:16:08.133 19:16:45 -- common/autotest_common.sh@817 -- # '[' -z 75774 ']' 00:16:08.133 19:16:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.133 19:16:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:08.133 19:16:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.133 19:16:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:08.133 19:16:45 -- common/autotest_common.sh@10 -- # set +x 00:16:08.133 [2024-02-14 19:16:45.372172] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:16:08.133 [2024-02-14 19:16:45.372288] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.133 [2024-02-14 19:16:45.514801] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.391 [2024-02-14 19:16:45.669819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:08.391 [2024-02-14 19:16:45.670005] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.391 [2024-02-14 19:16:45.670020] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.392 [2024-02-14 19:16:45.670030] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.392 [2024-02-14 19:16:45.670254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:08.392 [2024-02-14 19:16:45.670398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:08.392 [2024-02-14 19:16:45.670552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:08.392 [2024-02-14 19:16:45.671177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.328 19:16:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:09.328 19:16:46 -- common/autotest_common.sh@850 -- # return 0 00:16:09.328 19:16:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:09.328 19:16:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:09.328 19:16:46 -- common/autotest_common.sh@10 -- # set +x 00:16:09.328 19:16:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.328 19:16:46 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:09.328 19:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.328 19:16:46 -- common/autotest_common.sh@10 -- # set +x 00:16:09.328 [2024-02-14 19:16:46.433680] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.328 19:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.328 19:16:46 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:09.328 19:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.328 19:16:46 -- common/autotest_common.sh@10 -- # set +x 00:16:09.328 Malloc0 00:16:09.328 19:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.328 19:16:46 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:09.328 19:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.328 19:16:46 -- common/autotest_common.sh@10 -- # set +x 00:16:09.328 19:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.328 19:16:46 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:09.328 19:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.328 19:16:46 -- common/autotest_common.sh@10 -- # set +x 00:16:09.328 19:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.328 19:16:46 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.328 19:16:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.328 19:16:46 -- common/autotest_common.sh@10 -- # set +x 00:16:09.328 
[2024-02-14 19:16:46.511391] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.328 19:16:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.328 19:16:46 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:09.328 19:16:46 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:09.328 19:16:46 -- nvmf/common.sh@520 -- # config=() 00:16:09.328 19:16:46 -- nvmf/common.sh@520 -- # local subsystem config 00:16:09.328 19:16:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:09.328 19:16:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:09.328 { 00:16:09.328 "params": { 00:16:09.328 "name": "Nvme$subsystem", 00:16:09.328 "trtype": "$TEST_TRANSPORT", 00:16:09.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.328 "adrfam": "ipv4", 00:16:09.328 "trsvcid": "$NVMF_PORT", 00:16:09.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.328 "hdgst": ${hdgst:-false}, 00:16:09.328 "ddgst": ${ddgst:-false} 00:16:09.328 }, 00:16:09.328 "method": "bdev_nvme_attach_controller" 00:16:09.328 } 00:16:09.328 EOF 00:16:09.328 )") 00:16:09.328 19:16:46 -- nvmf/common.sh@542 -- # cat 00:16:09.328 19:16:46 -- nvmf/common.sh@544 -- # jq . 00:16:09.328 19:16:46 -- nvmf/common.sh@545 -- # IFS=, 00:16:09.328 19:16:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:09.328 "params": { 00:16:09.328 "name": "Nvme1", 00:16:09.328 "trtype": "tcp", 00:16:09.328 "traddr": "10.0.0.2", 00:16:09.328 "adrfam": "ipv4", 00:16:09.328 "trsvcid": "4420", 00:16:09.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.328 "hdgst": false, 00:16:09.328 "ddgst": false 00:16:09.328 }, 00:16:09.328 "method": "bdev_nvme_attach_controller" 00:16:09.328 }' 00:16:09.328 [2024-02-14 19:16:46.576784] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:09.328 [2024-02-14 19:16:46.576929] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75828 ] 00:16:09.328 [2024-02-14 19:16:46.720750] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.587 [2024-02-14 19:16:46.886972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.587 [2024-02-14 19:16:46.887139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.587 [2024-02-14 19:16:46.887509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.587 [2024-02-14 19:16:46.887661] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:09.845 [2024-02-14 19:16:47.107795] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:09.845 [2024-02-14 19:16:47.108195] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:09.845 I/O targets: 00:16:09.845 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:09.845 00:16:09.845 00:16:09.845 CUnit - A unit testing framework for C - Version 2.1-3 00:16:09.845 http://cunit.sourceforge.net/ 00:16:09.845 00:16:09.845 00:16:09.845 Suite: bdevio tests on: Nvme1n1 00:16:09.845 Test: blockdev write read block ...passed 00:16:09.845 Test: blockdev write zeroes read block ...passed 00:16:09.845 Test: blockdev write zeroes read no split ...passed 00:16:09.845 Test: blockdev write zeroes read split ...passed 00:16:09.845 Test: blockdev write zeroes read split partial ...passed 00:16:09.845 Test: blockdev reset ...[2024-02-14 19:16:47.226852] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:09.845 [2024-02-14 19:16:47.227146] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e6fd0 (9): Bad file descriptor 00:16:09.845 [2024-02-14 19:16:47.248006] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:09.845 passed 00:16:09.845 Test: blockdev write read 8 blocks ...passed 00:16:09.846 Test: blockdev write read size > 128k ...passed 00:16:09.846 Test: blockdev write read invalid size ...passed 00:16:10.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:10.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:10.105 Test: blockdev write read max offset ...passed 00:16:10.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:10.105 Test: blockdev writev readv 8 blocks ...passed 00:16:10.105 Test: blockdev writev readv 30 x 1block ...passed 00:16:10.105 Test: blockdev writev readv block ...passed 00:16:10.105 Test: blockdev writev readv size > 128k ...passed 00:16:10.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:10.105 Test: blockdev comparev and writev ...[2024-02-14 19:16:47.425569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.105 [2024-02-14 19:16:47.425633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:10.105 [2024-02-14 19:16:47.425659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.105 [2024-02-14 19:16:47.425673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.105 [2024-02-14 19:16:47.426146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.105 [2024-02-14 19:16:47.426182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:10.105 [2024-02-14 19:16:47.426205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.105 [2024-02-14 19:16:47.426218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:10.105 [2024-02-14 19:16:47.426628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.105 [2024-02-14 19:16:47.426665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:10.105 [2024-02-14 19:16:47.426688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.105 [2024-02-14 19:16:47.426701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:10.105 [2024-02-14 19:16:47.427074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.105 [2024-02-14 19:16:47.427109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:10.105 [2024-02-14 19:16:47.427129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.105 [2024-02-14 19:16:47.427139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:10.105 passed 00:16:10.105 Test: blockdev nvme passthru rw ...passed 00:16:10.105 Test: blockdev nvme passthru vendor specific ...[2024-02-14 19:16:47.511982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:10.105 [2024-02-14 19:16:47.512032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:10.105 [2024-02-14 19:16:47.512513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:10.105 [2024-02-14 19:16:47.512548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:10.105 [2024-02-14 19:16:47.512800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:10.105 [2024-02-14 19:16:47.513100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:10.105 [2024-02-14 19:16:47.513645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:10.105 [2024-02-14 19:16:47.513694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:10.105 passed 00:16:10.364 Test: blockdev nvme admin passthru ...passed 00:16:10.364 Test: blockdev copy ...passed 00:16:10.364 00:16:10.364 Run Summary: Type Total Ran Passed Failed Inactive 00:16:10.364 suites 1 1 n/a 0 0 00:16:10.364 tests 23 23 23 0 0 00:16:10.364 asserts 152 152 152 0 n/a 00:16:10.364 00:16:10.364 Elapsed time = 0.912 seconds 00:16:10.364 [2024-02-14 19:16:47.570005] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:16:10.650 19:16:47 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:10.650 19:16:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:10.650 19:16:47 -- common/autotest_common.sh@10 -- # set +x 00:16:10.650 19:16:47 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:10.650 19:16:47 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:10.650 19:16:47 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:10.650 19:16:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:10.650 19:16:47 -- nvmf/common.sh@116 -- # sync 00:16:10.650 19:16:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:10.650 19:16:47 -- nvmf/common.sh@119 -- # set +e 00:16:10.650 19:16:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:10.650 19:16:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:10.650 rmmod nvme_tcp 00:16:10.650 rmmod nvme_fabrics 00:16:10.650 rmmod nvme_keyring 00:16:10.650 19:16:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:10.650 19:16:48 -- nvmf/common.sh@123 -- # set -e 00:16:10.650 19:16:48 -- nvmf/common.sh@124 -- # return 0 00:16:10.650 19:16:48 -- nvmf/common.sh@477 -- # '[' -n 75774 ']' 00:16:10.650 19:16:48 -- nvmf/common.sh@478 -- # killprocess 75774 00:16:10.650 19:16:48 -- common/autotest_common.sh@924 -- # '[' -z 75774 ']' 00:16:10.650 19:16:48 -- common/autotest_common.sh@928 -- # kill -0 75774 00:16:10.650 19:16:48 -- common/autotest_common.sh@929 -- # uname 00:16:10.650 19:16:48 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:10.650 19:16:48 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 75774 00:16:10.910 19:16:48 -- common/autotest_common.sh@930 -- # process_name=reactor_3 00:16:10.910 killing process with pid 75774 00:16:10.910 19:16:48 -- common/autotest_common.sh@934 -- # '[' reactor_3 = sudo ']' 00:16:10.910 19:16:48 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 75774' 00:16:10.910 19:16:48 -- common/autotest_common.sh@943 -- # kill 75774 00:16:10.910 19:16:48 -- common/autotest_common.sh@948 -- # wait 75774 00:16:11.169 19:16:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:11.169 19:16:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:11.169 19:16:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:11.169 19:16:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.169 19:16:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:11.169 19:16:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.169 19:16:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.169 19:16:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.169 19:16:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:11.169 00:16:11.169 real 0m3.680s 00:16:11.169 user 0m13.335s 00:16:11.169 sys 0m0.919s 00:16:11.169 19:16:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:11.169 ************************************ 00:16:11.169 END TEST nvmf_bdevio 00:16:11.169 ************************************ 00:16:11.169 19:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:11.169 19:16:48 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:11.169 19:16:48 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:11.169 19:16:48 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:16:11.169 19:16:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:11.169 19:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:11.169 ************************************ 00:16:11.169 START TEST nvmf_bdevio_no_huge 00:16:11.169 ************************************ 00:16:11.169 19:16:48 -- common/autotest_common.sh@1102 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:11.428 * Looking for test storage... 00:16:11.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:11.428 19:16:48 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:11.428 19:16:48 -- nvmf/common.sh@7 -- # uname -s 00:16:11.428 19:16:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.428 19:16:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.428 19:16:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.428 19:16:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.428 19:16:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.428 19:16:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.428 19:16:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.428 19:16:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.428 19:16:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.428 19:16:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.428 19:16:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:16:11.428 19:16:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:16:11.428 19:16:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.428 19:16:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.428 19:16:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:11.428 19:16:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.428 19:16:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.428 19:16:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.428 19:16:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.428 19:16:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.428 19:16:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.428 19:16:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.428 19:16:48 -- paths/export.sh@5 -- # export PATH 00:16:11.428 19:16:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.428 19:16:48 -- nvmf/common.sh@46 -- # : 0 00:16:11.428 19:16:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:11.428 19:16:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:11.428 19:16:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:11.428 19:16:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.428 19:16:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.428 19:16:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:11.428 19:16:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:11.428 19:16:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:11.428 19:16:48 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.428 19:16:48 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.428 19:16:48 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:11.428 19:16:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:11.428 19:16:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.428 19:16:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:11.428 19:16:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:11.428 19:16:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:11.428 19:16:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.428 19:16:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.428 19:16:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.428 19:16:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:11.428 19:16:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:11.428 19:16:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:11.428 19:16:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:11.428 19:16:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:11.428 19:16:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:11.428 19:16:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.428 19:16:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.428 19:16:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:11.428 19:16:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:11.428 19:16:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:11.428 19:16:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:11.429 19:16:48 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:11.429 19:16:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.429 19:16:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:11.429 19:16:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:11.429 19:16:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:11.429 19:16:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:11.429 19:16:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:11.429 19:16:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:11.429 Cannot find device "nvmf_tgt_br" 00:16:11.429 19:16:48 -- nvmf/common.sh@154 -- # true 00:16:11.429 19:16:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.429 Cannot find device "nvmf_tgt_br2" 00:16:11.429 19:16:48 -- nvmf/common.sh@155 -- # true 00:16:11.429 19:16:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:11.429 19:16:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:11.429 Cannot find device "nvmf_tgt_br" 00:16:11.429 19:16:48 -- nvmf/common.sh@157 -- # true 00:16:11.429 19:16:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:11.429 Cannot find device "nvmf_tgt_br2" 00:16:11.429 19:16:48 -- nvmf/common.sh@158 -- # true 00:16:11.429 19:16:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:11.429 19:16:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:11.429 19:16:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.429 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.429 19:16:48 -- nvmf/common.sh@161 -- # true 00:16:11.429 19:16:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.429 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.429 19:16:48 -- nvmf/common.sh@162 -- # true 00:16:11.429 19:16:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.429 19:16:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.429 19:16:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.429 19:16:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.429 19:16:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.429 19:16:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.429 19:16:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:11.687 19:16:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:11.687 19:16:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:11.687 19:16:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:11.687 19:16:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:11.687 19:16:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:11.687 19:16:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:11.687 19:16:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.687 19:16:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:11.687 19:16:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:11.688 19:16:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:11.688 19:16:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:11.688 19:16:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.688 19:16:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:11.688 19:16:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:11.688 19:16:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:11.688 19:16:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:11.688 19:16:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:11.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:16:11.688 00:16:11.688 --- 10.0.0.2 ping statistics --- 00:16:11.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.688 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:11.688 19:16:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:11.688 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:11.688 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:16:11.688 00:16:11.688 --- 10.0.0.3 ping statistics --- 00:16:11.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.688 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:11.688 19:16:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:11.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:11.688 00:16:11.688 --- 10.0.0.1 ping statistics --- 00:16:11.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.688 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:11.688 19:16:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.688 19:16:48 -- nvmf/common.sh@421 -- # return 0 00:16:11.688 19:16:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:11.688 19:16:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.688 19:16:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:11.688 19:16:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:11.688 19:16:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.688 19:16:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:11.688 19:16:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:11.688 19:16:48 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:11.688 19:16:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:11.688 19:16:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:11.688 19:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:11.688 19:16:48 -- nvmf/common.sh@469 -- # nvmfpid=76010 00:16:11.688 19:16:48 -- nvmf/common.sh@470 -- # waitforlisten 76010 00:16:11.688 19:16:48 -- common/autotest_common.sh@817 -- # '[' -z 76010 ']' 00:16:11.688 19:16:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.688 19:16:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:11.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.688 19:16:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:11.688 19:16:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:11.688 19:16:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:11.688 19:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:11.688 [2024-02-14 19:16:49.064717] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:11.688 [2024-02-14 19:16:49.064879] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:11.947 [2024-02-14 19:16:49.218964] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.947 [2024-02-14 19:16:49.343415] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.947 [2024-02-14 19:16:49.343593] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.947 [2024-02-14 19:16:49.343607] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.947 [2024-02-14 19:16:49.343617] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.947 [2024-02-14 19:16:49.343789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:11.947 [2024-02-14 19:16:49.344447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:11.947 [2024-02-14 19:16:49.344810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:11.947 [2024-02-14 19:16:49.344814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:12.883 19:16:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:12.883 19:16:50 -- common/autotest_common.sh@850 -- # return 0 00:16:12.883 19:16:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:12.883 19:16:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:12.883 19:16:50 -- common/autotest_common.sh@10 -- # set +x 00:16:12.883 19:16:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.883 19:16:50 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:12.883 19:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.883 19:16:50 -- common/autotest_common.sh@10 -- # set +x 00:16:12.883 [2024-02-14 19:16:50.089137] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.883 19:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.883 19:16:50 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:12.883 19:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.883 19:16:50 -- common/autotest_common.sh@10 -- # set +x 00:16:12.883 Malloc0 00:16:12.883 19:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.883 19:16:50 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:12.883 19:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.883 19:16:50 -- common/autotest_common.sh@10 -- # set +x 00:16:12.883 19:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.883 19:16:50 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:12.883 19:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:16:12.883 19:16:50 -- common/autotest_common.sh@10 -- # set +x 00:16:12.883 19:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.883 19:16:50 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.883 19:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.883 19:16:50 -- common/autotest_common.sh@10 -- # set +x 00:16:12.883 [2024-02-14 19:16:50.133568] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.883 19:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.883 19:16:50 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:12.883 19:16:50 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:12.883 19:16:50 -- nvmf/common.sh@520 -- # config=() 00:16:12.883 19:16:50 -- nvmf/common.sh@520 -- # local subsystem config 00:16:12.883 19:16:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:12.883 19:16:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:12.883 { 00:16:12.883 "params": { 00:16:12.883 "name": "Nvme$subsystem", 00:16:12.883 "trtype": "$TEST_TRANSPORT", 00:16:12.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:12.883 "adrfam": "ipv4", 00:16:12.883 "trsvcid": "$NVMF_PORT", 00:16:12.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:12.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:12.883 "hdgst": ${hdgst:-false}, 00:16:12.883 "ddgst": ${ddgst:-false} 00:16:12.883 }, 00:16:12.883 "method": "bdev_nvme_attach_controller" 00:16:12.883 } 00:16:12.883 EOF 00:16:12.883 )") 00:16:12.883 19:16:50 -- nvmf/common.sh@542 -- # cat 00:16:12.883 19:16:50 -- nvmf/common.sh@544 -- # jq . 00:16:12.883 19:16:50 -- nvmf/common.sh@545 -- # IFS=, 00:16:12.883 19:16:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:12.883 "params": { 00:16:12.883 "name": "Nvme1", 00:16:12.883 "trtype": "tcp", 00:16:12.883 "traddr": "10.0.0.2", 00:16:12.883 "adrfam": "ipv4", 00:16:12.883 "trsvcid": "4420", 00:16:12.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:12.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:12.883 "hdgst": false, 00:16:12.883 "ddgst": false 00:16:12.883 }, 00:16:12.883 "method": "bdev_nvme_attach_controller" 00:16:12.883 }' 00:16:12.883 [2024-02-14 19:16:50.199323] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:12.883 [2024-02-14 19:16:50.199448] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76064 ] 00:16:13.141 [2024-02-14 19:16:50.347871] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:13.141 [2024-02-14 19:16:50.505653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.141 [2024-02-14 19:16:50.505806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.141 [2024-02-14 19:16:50.505811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.141 [2024-02-14 19:16:50.506085] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:13.400 [2024-02-14 19:16:50.705520] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:13.400 [2024-02-14 19:16:50.705583] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:13.400 I/O targets: 00:16:13.400 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:13.400 00:16:13.400 00:16:13.400 CUnit - A unit testing framework for C - Version 2.1-3 00:16:13.400 http://cunit.sourceforge.net/ 00:16:13.400 00:16:13.400 00:16:13.400 Suite: bdevio tests on: Nvme1n1 00:16:13.400 Test: blockdev write read block ...passed 00:16:13.400 Test: blockdev write zeroes read block ...passed 00:16:13.400 Test: blockdev write zeroes read no split ...passed 00:16:13.659 Test: blockdev write zeroes read split ...passed 00:16:13.659 Test: blockdev write zeroes read split partial ...passed 00:16:13.659 Test: blockdev reset ...[2024-02-14 19:16:50.841216] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:13.659 [2024-02-14 19:16:50.841382] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2080360 (9): Bad file descriptor 00:16:13.659 [2024-02-14 19:16:50.854313] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:13.659 passed 00:16:13.659 Test: blockdev write read 8 blocks ...passed 00:16:13.659 Test: blockdev write read size > 128k ...passed 00:16:13.659 Test: blockdev write read invalid size ...passed 00:16:13.659 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:13.659 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:13.659 Test: blockdev write read max offset ...passed 00:16:13.659 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:13.659 Test: blockdev writev readv 8 blocks ...passed 00:16:13.659 Test: blockdev writev readv 30 x 1block ...passed 00:16:13.659 Test: blockdev writev readv block ...passed 00:16:13.659 Test: blockdev writev readv size > 128k ...passed 00:16:13.659 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:13.659 Test: blockdev comparev and writev ...[2024-02-14 19:16:51.028928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.659 [2024-02-14 19:16:51.028982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:13.659 [2024-02-14 19:16:51.029005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.659 [2024-02-14 19:16:51.029018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:13.659 [2024-02-14 19:16:51.029422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.659 [2024-02-14 19:16:51.029451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:13.659 [2024-02-14 19:16:51.029469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.659 [2024-02-14 19:16:51.029480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:13.659 [2024-02-14 19:16:51.029873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.659 [2024-02-14 19:16:51.029963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:13.659 [2024-02-14 19:16:51.029983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.659 [2024-02-14 19:16:51.029994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:13.659 [2024-02-14 19:16:51.030468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.659 [2024-02-14 19:16:51.030509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:13.659 [2024-02-14 19:16:51.030528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.659 [2024-02-14 19:16:51.030554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:13.659 passed 00:16:13.918 Test: blockdev nvme passthru rw ...passed 00:16:13.918 Test: blockdev nvme passthru vendor specific ...[2024-02-14 19:16:51.113982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:13.918 [2024-02-14 19:16:51.114035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:13.918 [2024-02-14 19:16:51.114382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:13.918 [2024-02-14 19:16:51.114470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:13.918 [2024-02-14 19:16:51.114688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:13.918 [2024-02-14 19:16:51.114709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:13.918 [2024-02-14 19:16:51.115088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:13.918 [2024-02-14 19:16:51.115123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:13.918 passed 00:16:13.918 Test: blockdev nvme admin passthru ...passed 00:16:13.918 Test: blockdev copy ...passed 00:16:13.918 00:16:13.918 Run Summary: Type Total Ran Passed Failed Inactive 00:16:13.918 suites 1 1 n/a 0 0 00:16:13.918 tests 23 23 23 0 0 00:16:13.918 asserts 152 152 152 0 n/a 00:16:13.918 00:16:13.918 Elapsed time = 0.938 seconds 00:16:13.918 [2024-02-14 19:16:51.176123] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:16:14.484 19:16:51 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.484 19:16:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.484 19:16:51 -- common/autotest_common.sh@10 -- # set +x 00:16:14.484 19:16:51 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.484 19:16:51 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:14.484 19:16:51 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:14.484 19:16:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:14.484 19:16:51 -- nvmf/common.sh@116 -- # sync 00:16:14.484 19:16:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:14.484 19:16:51 -- nvmf/common.sh@119 -- # set +e 00:16:14.484 19:16:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:14.484 19:16:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:14.484 rmmod nvme_tcp 00:16:14.484 rmmod nvme_fabrics 00:16:14.484 rmmod nvme_keyring 00:16:14.484 19:16:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:14.484 19:16:51 -- nvmf/common.sh@123 -- # set -e 00:16:14.484 19:16:51 -- nvmf/common.sh@124 -- # return 0 00:16:14.484 19:16:51 -- nvmf/common.sh@477 -- # '[' -n 76010 ']' 00:16:14.484 19:16:51 -- nvmf/common.sh@478 -- # killprocess 76010 00:16:14.484 19:16:51 -- common/autotest_common.sh@924 -- # '[' -z 76010 ']' 00:16:14.484 19:16:51 -- common/autotest_common.sh@928 -- # kill -0 76010 00:16:14.484 19:16:51 -- common/autotest_common.sh@929 -- # uname 00:16:14.484 19:16:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:14.484 19:16:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 76010 00:16:14.485 killing process with pid 76010 00:16:14.485 19:16:51 -- common/autotest_common.sh@930 -- # process_name=reactor_3 00:16:14.485 19:16:51 -- common/autotest_common.sh@934 -- # '[' reactor_3 = sudo ']' 00:16:14.485 19:16:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 76010' 00:16:14.485 19:16:51 -- common/autotest_common.sh@943 -- # kill 76010 00:16:14.485 19:16:51 -- common/autotest_common.sh@948 -- # wait 76010 00:16:15.052 19:16:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:15.052 19:16:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:15.052 19:16:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:15.052 19:16:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.052 19:16:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:15.052 19:16:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.052 19:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.052 19:16:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.052 19:16:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:15.052 ************************************ 00:16:15.052 END TEST nvmf_bdevio_no_huge 00:16:15.052 ************************************ 00:16:15.052 00:16:15.052 real 0m3.837s 00:16:15.052 user 0m13.917s 00:16:15.052 sys 0m1.468s 00:16:15.052 19:16:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:15.052 19:16:52 -- common/autotest_common.sh@10 -- # set +x 00:16:15.052 19:16:52 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:15.052 19:16:52 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:15.052 19:16:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:15.052 19:16:52 -- common/autotest_common.sh@10 -- # set +x 00:16:15.052 ************************************ 00:16:15.052 START TEST nvmf_tls 00:16:15.052 ************************************ 00:16:15.052 19:16:52 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:15.310 * Looking for test 
storage... 00:16:15.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:15.310 19:16:52 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.310 19:16:52 -- nvmf/common.sh@7 -- # uname -s 00:16:15.310 19:16:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.310 19:16:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.310 19:16:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.310 19:16:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.310 19:16:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.310 19:16:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.310 19:16:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.310 19:16:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.310 19:16:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.310 19:16:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.310 19:16:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:16:15.311 19:16:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:16:15.311 19:16:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.311 19:16:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.311 19:16:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.311 19:16:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.311 19:16:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.311 19:16:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.311 19:16:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.311 19:16:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.311 19:16:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.311 19:16:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.311 19:16:52 -- 
paths/export.sh@5 -- # export PATH 00:16:15.311 19:16:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.311 19:16:52 -- nvmf/common.sh@46 -- # : 0 00:16:15.311 19:16:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:15.311 19:16:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:15.311 19:16:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:15.311 19:16:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.311 19:16:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.311 19:16:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:15.311 19:16:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:15.311 19:16:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:15.311 19:16:52 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.311 19:16:52 -- target/tls.sh@71 -- # nvmftestinit 00:16:15.311 19:16:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:15.311 19:16:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.311 19:16:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:15.311 19:16:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:15.311 19:16:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:15.311 19:16:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.311 19:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.311 19:16:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.311 19:16:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:15.311 19:16:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:15.311 19:16:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:15.311 19:16:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:15.311 19:16:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:15.311 19:16:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:15.311 19:16:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.311 19:16:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.311 19:16:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:15.311 19:16:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:15.311 19:16:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:15.311 19:16:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:15.311 19:16:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:15.311 19:16:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.311 19:16:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:15.311 19:16:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:15.311 19:16:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:15.311 19:16:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:15.311 19:16:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:15.311 19:16:52 -- nvmf/common.sh@154 -- # ip link set 
nvmf_tgt_br nomaster 00:16:15.311 Cannot find device "nvmf_tgt_br" 00:16:15.311 19:16:52 -- nvmf/common.sh@154 -- # true 00:16:15.311 19:16:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.311 Cannot find device "nvmf_tgt_br2" 00:16:15.311 19:16:52 -- nvmf/common.sh@155 -- # true 00:16:15.311 19:16:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:15.311 19:16:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:15.311 Cannot find device "nvmf_tgt_br" 00:16:15.311 19:16:52 -- nvmf/common.sh@157 -- # true 00:16:15.311 19:16:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:15.311 Cannot find device "nvmf_tgt_br2" 00:16:15.311 19:16:52 -- nvmf/common.sh@158 -- # true 00:16:15.311 19:16:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:15.311 19:16:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:15.311 19:16:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:15.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.311 19:16:52 -- nvmf/common.sh@161 -- # true 00:16:15.311 19:16:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:15.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.311 19:16:52 -- nvmf/common.sh@162 -- # true 00:16:15.311 19:16:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:15.311 19:16:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:15.311 19:16:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:15.311 19:16:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:15.311 19:16:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:15.569 19:16:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:15.569 19:16:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:15.569 19:16:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:15.569 19:16:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:15.569 19:16:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:15.569 19:16:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:15.569 19:16:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:15.569 19:16:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:15.569 19:16:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:15.569 19:16:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:15.569 19:16:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:15.569 19:16:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:15.569 19:16:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:15.569 19:16:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:15.569 19:16:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:15.569 19:16:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:15.569 19:16:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:15.569 19:16:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:16:15.569 19:16:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:15.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:16:15.569 00:16:15.569 --- 10.0.0.2 ping statistics --- 00:16:15.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.569 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:16:15.569 19:16:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:15.569 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:15.569 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:16:15.569 00:16:15.569 --- 10.0.0.3 ping statistics --- 00:16:15.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.569 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:15.570 19:16:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:15.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:15.570 00:16:15.570 --- 10.0.0.1 ping statistics --- 00:16:15.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.570 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:15.570 19:16:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.570 19:16:52 -- nvmf/common.sh@421 -- # return 0 00:16:15.570 19:16:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:15.570 19:16:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.570 19:16:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:15.570 19:16:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:15.570 19:16:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.570 19:16:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:15.570 19:16:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:15.570 19:16:52 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:15.570 19:16:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:15.570 19:16:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:15.570 19:16:52 -- common/autotest_common.sh@10 -- # set +x 00:16:15.570 19:16:52 -- nvmf/common.sh@469 -- # nvmfpid=76253 00:16:15.570 19:16:52 -- nvmf/common.sh@470 -- # waitforlisten 76253 00:16:15.570 19:16:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:15.570 19:16:52 -- common/autotest_common.sh@817 -- # '[' -z 76253 ']' 00:16:15.570 19:16:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.570 19:16:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:15.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.570 19:16:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.570 19:16:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:15.570 19:16:52 -- common/autotest_common.sh@10 -- # set +x 00:16:15.570 [2024-02-14 19:16:52.985089] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
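For reference, the block above is nvmf_veth_init (nvmf/common.sh) tearing down any leftover interfaces and rebuilding the test network before the target starts. A condensed, hand-runnable sketch of that topology follows; interface names, namespace, addresses, and firewall rules are taken from the trace, and this is an illustration of what the helper does, not the helper itself.

#!/usr/bin/env bash
# Topology seen in the trace above:
#   root ns:   nvmf_init_if (10.0.0.1/24) <-veth-> nvmf_init_br --\
#   target ns: nvmf_tgt_if  (10.0.0.2/24) <-veth-> nvmf_tgt_br  ---> nvmf_br (bridge)
#              nvmf_tgt_if2 (10.0.0.3/24) <-veth-> nvmf_tgt_br2 --/
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# The target-side ends live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# One bridge ties the host-side veth ends together so 10.0.0.1 can reach .2 and .3.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and traffic across the bridge, as in the trace.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, mirroring the pings in the log.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec "$NS" ping -c 1 10.0.0.1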
00:16:15.570 [2024-02-14 19:16:52.985544] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.828 [2024-02-14 19:16:53.125182] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.087 [2024-02-14 19:16:53.245231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.087 [2024-02-14 19:16:53.245383] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.087 [2024-02-14 19:16:53.245398] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.087 [2024-02-14 19:16:53.245407] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.087 [2024-02-14 19:16:53.245446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.653 19:16:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:16.653 19:16:53 -- common/autotest_common.sh@850 -- # return 0 00:16:16.653 19:16:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:16.653 19:16:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:16.653 19:16:53 -- common/autotest_common.sh@10 -- # set +x 00:16:16.653 19:16:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.653 19:16:53 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:16.653 19:16:53 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:16.910 true 00:16:16.910 19:16:54 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:16.910 19:16:54 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:17.168 19:16:54 -- target/tls.sh@82 -- # version=0 00:16:17.168 19:16:54 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:17.168 19:16:54 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:17.426 19:16:54 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:17.426 19:16:54 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:17.684 19:16:55 -- target/tls.sh@90 -- # version=13 00:16:17.684 19:16:55 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:17.684 19:16:55 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:17.942 19:16:55 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:17.942 19:16:55 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:18.201 19:16:55 -- target/tls.sh@98 -- # version=7 00:16:18.201 19:16:55 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:18.201 19:16:55 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:18.201 19:16:55 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:18.459 19:16:55 -- target/tls.sh@105 -- # ktls=false 00:16:18.459 19:16:55 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:18.459 19:16:55 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:18.718 19:16:56 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:18.718 19:16:56 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:18.977 19:16:56 -- target/tls.sh@113 -- # ktls=true 00:16:18.977 19:16:56 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:18.977 19:16:56 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:19.235 19:16:56 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:19.235 19:16:56 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:19.494 19:16:56 -- target/tls.sh@121 -- # ktls=false 00:16:19.494 19:16:56 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:19.494 19:16:56 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:19.494 19:16:56 -- target/tls.sh@49 -- # local key hash crc 00:16:19.494 19:16:56 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:19.494 19:16:56 -- target/tls.sh@51 -- # hash=01 00:16:19.494 19:16:56 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:19.494 19:16:56 -- target/tls.sh@52 -- # gzip -1 -c 00:16:19.494 19:16:56 -- target/tls.sh@52 -- # tail -c8 00:16:19.494 19:16:56 -- target/tls.sh@52 -- # head -c 4 00:16:19.494 19:16:56 -- target/tls.sh@52 -- # crc='p$H�' 00:16:19.494 19:16:56 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:19.494 19:16:56 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:19.494 19:16:56 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:19.494 19:16:56 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:19.494 19:16:56 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:19.494 19:16:56 -- target/tls.sh@49 -- # local key hash crc 00:16:19.494 19:16:56 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:19.494 19:16:56 -- target/tls.sh@51 -- # hash=01 00:16:19.494 19:16:56 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:19.494 19:16:56 -- target/tls.sh@52 -- # gzip -1 -c 00:16:19.494 19:16:56 -- target/tls.sh@52 -- # tail -c8 00:16:19.494 19:16:56 -- target/tls.sh@52 -- # head -c 4 00:16:19.494 19:16:56 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:19.494 19:16:56 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:19.494 19:16:56 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:19.494 19:16:56 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:19.494 19:16:56 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:19.494 19:16:56 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:19.494 19:16:56 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:19.494 19:16:56 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:19.494 19:16:56 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:19.494 19:16:56 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:19.494 19:16:56 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:19.494 19:16:56 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:19.753 19:16:57 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:20.319 19:16:57 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:20.319 19:16:57 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:20.319 19:16:57 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:20.319 [2024-02-14 19:16:57.710649] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.578 19:16:57 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:20.837 19:16:58 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:20.837 [2024-02-14 19:16:58.214771] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:20.837 [2024-02-14 19:16:58.215029] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.837 19:16:58 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:21.096 malloc0 00:16:21.096 19:16:58 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:21.355 19:16:58 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:21.614 19:16:58 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:33.829 Initializing NVMe Controllers 00:16:33.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:33.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:33.829 Initialization complete. Launching workers. 
00:16:33.829 ======================================================== 00:16:33.829 Latency(us) 00:16:33.829 Device Information : IOPS MiB/s Average min max 00:16:33.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9319.86 36.41 6868.58 1682.62 8023.97 00:16:33.829 ======================================================== 00:16:33.829 Total : 9319.86 36.41 6868.58 1682.62 8023.97 00:16:33.829 00:16:33.830 19:17:09 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:33.830 19:17:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:33.830 19:17:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:33.830 19:17:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:33.830 19:17:09 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:33.830 19:17:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:33.830 19:17:09 -- target/tls.sh@28 -- # bdevperf_pid=76622 00:16:33.830 19:17:09 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:33.830 19:17:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:33.830 19:17:09 -- target/tls.sh@31 -- # waitforlisten 76622 /var/tmp/bdevperf.sock 00:16:33.830 19:17:09 -- common/autotest_common.sh@817 -- # '[' -z 76622 ']' 00:16:33.830 19:17:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:33.830 19:17:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:33.830 19:17:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:33.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:33.830 19:17:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:33.830 19:17:09 -- common/autotest_common.sh@10 -- # set +x 00:16:33.830 [2024-02-14 19:17:09.204286] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:33.830 [2024-02-14 19:17:09.204446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76622 ] 00:16:33.830 [2024-02-14 19:17:09.343692] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.830 [2024-02-14 19:17:09.473048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.830 19:17:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:33.830 19:17:10 -- common/autotest_common.sh@850 -- # return 0 00:16:33.830 19:17:10 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:33.830 [2024-02-14 19:17:10.426104] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:33.830 TLSTESTn1 00:16:33.830 19:17:10 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:33.830 Running I/O for 10 seconds... 
00:16:43.804 00:16:43.804 Latency(us) 00:16:43.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.804 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:43.804 Verification LBA range: start 0x0 length 0x2000 00:16:43.804 TLSTESTn1 : 10.03 4242.84 16.57 0.00 0.00 30107.95 5004.57 32648.84 00:16:43.804 =================================================================================================================== 00:16:43.804 Total : 4242.84 16.57 0.00 0.00 30107.95 5004.57 32648.84 00:16:43.804 0 00:16:43.804 19:17:20 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:43.804 19:17:20 -- target/tls.sh@45 -- # killprocess 76622 00:16:43.804 19:17:20 -- common/autotest_common.sh@924 -- # '[' -z 76622 ']' 00:16:43.804 19:17:20 -- common/autotest_common.sh@928 -- # kill -0 76622 00:16:43.804 19:17:20 -- common/autotest_common.sh@929 -- # uname 00:16:43.804 19:17:20 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:43.804 19:17:20 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 76622 00:16:43.804 killing process with pid 76622 00:16:43.804 Received shutdown signal, test time was about 10.000000 seconds 00:16:43.804 00:16:43.804 Latency(us) 00:16:43.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.804 =================================================================================================================== 00:16:43.804 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:43.804 19:17:20 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:16:43.804 19:17:20 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:16:43.804 19:17:20 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 76622' 00:16:43.804 19:17:20 -- common/autotest_common.sh@943 -- # kill 76622 00:16:43.804 19:17:20 -- common/autotest_common.sh@948 -- # wait 76622 00:16:43.804 19:17:20 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:43.804 19:17:20 -- common/autotest_common.sh@638 -- # local es=0 00:16:43.804 19:17:20 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:43.804 19:17:20 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:43.804 19:17:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:43.804 19:17:20 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:43.804 19:17:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:43.804 19:17:20 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:43.804 19:17:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:43.804 19:17:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:43.804 19:17:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:43.804 19:17:20 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:43.804 19:17:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:43.804 19:17:20 -- target/tls.sh@28 -- # bdevperf_pid=76771 00:16:43.804 19:17:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.804 19:17:20 -- target/tls.sh@31 -- # 
waitforlisten 76771 /var/tmp/bdevperf.sock 00:16:43.804 19:17:20 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:43.804 19:17:20 -- common/autotest_common.sh@817 -- # '[' -z 76771 ']' 00:16:43.804 19:17:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.804 19:17:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:43.804 19:17:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:43.804 19:17:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:43.804 19:17:20 -- common/autotest_common.sh@10 -- # set +x 00:16:43.804 [2024-02-14 19:17:21.058870] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:43.804 [2024-02-14 19:17:21.059004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76771 ] 00:16:43.804 [2024-02-14 19:17:21.197566] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.063 [2024-02-14 19:17:21.321927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.000 19:17:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:45.000 19:17:22 -- common/autotest_common.sh@850 -- # return 0 00:16:45.000 19:17:22 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:45.000 [2024-02-14 19:17:22.288610] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:45.000 [2024-02-14 19:17:22.299168] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:45.000 [2024-02-14 19:17:22.299887] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125a940 (107): Transport endpoint is not connected 00:16:45.000 [2024-02-14 19:17:22.300852] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125a940 (9): Bad file descriptor 00:16:45.000 [2024-02-14 19:17:22.301849] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:45.000 [2024-02-14 19:17:22.301878] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:45.000 [2024-02-14 19:17:22.301894] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:45.000 2024/02/14 19:17:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:45.000 request: 00:16:45.000 { 00:16:45.000 "method": "bdev_nvme_attach_controller", 00:16:45.000 "params": { 00:16:45.000 "name": "TLSTEST", 00:16:45.000 "trtype": "tcp", 00:16:45.000 "traddr": "10.0.0.2", 00:16:45.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:45.000 "adrfam": "ipv4", 00:16:45.000 "trsvcid": "4420", 00:16:45.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.000 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:16:45.000 } 00:16:45.000 } 00:16:45.000 Got JSON-RPC error response 00:16:45.000 GoRPCClient: error on JSON-RPC call 00:16:45.000 19:17:22 -- target/tls.sh@36 -- # killprocess 76771 00:16:45.000 19:17:22 -- common/autotest_common.sh@924 -- # '[' -z 76771 ']' 00:16:45.000 19:17:22 -- common/autotest_common.sh@928 -- # kill -0 76771 00:16:45.000 19:17:22 -- common/autotest_common.sh@929 -- # uname 00:16:45.000 19:17:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:45.000 19:17:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 76771 00:16:45.000 killing process with pid 76771 00:16:45.000 Received shutdown signal, test time was about 10.000000 seconds 00:16:45.000 00:16:45.000 Latency(us) 00:16:45.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.000 =================================================================================================================== 00:16:45.000 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:45.000 19:17:22 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:16:45.000 19:17:22 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:16:45.000 19:17:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 76771' 00:16:45.000 19:17:22 -- common/autotest_common.sh@943 -- # kill 76771 00:16:45.000 19:17:22 -- common/autotest_common.sh@948 -- # wait 76771 00:16:45.259 19:17:22 -- target/tls.sh@37 -- # return 1 00:16:45.259 19:17:22 -- common/autotest_common.sh@641 -- # es=1 00:16:45.259 19:17:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:45.259 19:17:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:45.259 19:17:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:45.259 19:17:22 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.259 19:17:22 -- common/autotest_common.sh@638 -- # local es=0 00:16:45.259 19:17:22 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.259 19:17:22 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:45.259 19:17:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:45.259 19:17:22 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:45.259 19:17:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:45.259 19:17:22 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.259 19:17:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:45.259 19:17:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:45.259 19:17:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:45.259 19:17:22 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:45.259 19:17:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:45.259 19:17:22 -- target/tls.sh@28 -- # bdevperf_pid=76817 00:16:45.259 19:17:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:45.259 19:17:22 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:45.259 19:17:22 -- target/tls.sh@31 -- # waitforlisten 76817 /var/tmp/bdevperf.sock 00:16:45.259 19:17:22 -- common/autotest_common.sh@817 -- # '[' -z 76817 ']' 00:16:45.259 19:17:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:45.259 19:17:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:45.259 19:17:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:45.259 19:17:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:45.259 19:17:22 -- common/autotest_common.sh@10 -- # set +x 00:16:45.518 [2024-02-14 19:17:22.678166] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:45.518 [2024-02-14 19:17:22.678323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76817 ] 00:16:45.518 [2024-02-14 19:17:22.814867] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.777 [2024-02-14 19:17:22.979296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.350 19:17:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:46.350 19:17:23 -- common/autotest_common.sh@850 -- # return 0 00:16:46.350 19:17:23 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.609 [2024-02-14 19:17:23.911757] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:46.609 [2024-02-14 19:17:23.918998] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:46.609 [2024-02-14 19:17:23.919053] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:46.609 [2024-02-14 19:17:23.919119] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:46.609 [2024-02-14 19:17:23.920071] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2105940 (107): Transport endpoint is not connected 00:16:46.609 
[2024-02-14 19:17:23.921053] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2105940 (9): Bad file descriptor 00:16:46.609 [2024-02-14 19:17:23.922050] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:46.609 [2024-02-14 19:17:23.922079] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:46.609 [2024-02-14 19:17:23.922095] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:46.609 2024/02/14 19:17:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:46.609 request: 00:16:46.609 { 00:16:46.609 "method": "bdev_nvme_attach_controller", 00:16:46.609 "params": { 00:16:46.609 "name": "TLSTEST", 00:16:46.609 "trtype": "tcp", 00:16:46.609 "traddr": "10.0.0.2", 00:16:46.609 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:46.609 "adrfam": "ipv4", 00:16:46.609 "trsvcid": "4420", 00:16:46.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.609 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:46.609 } 00:16:46.609 } 00:16:46.609 Got JSON-RPC error response 00:16:46.609 GoRPCClient: error on JSON-RPC call 00:16:46.609 19:17:23 -- target/tls.sh@36 -- # killprocess 76817 00:16:46.609 19:17:23 -- common/autotest_common.sh@924 -- # '[' -z 76817 ']' 00:16:46.609 19:17:23 -- common/autotest_common.sh@928 -- # kill -0 76817 00:16:46.609 19:17:23 -- common/autotest_common.sh@929 -- # uname 00:16:46.609 19:17:23 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:46.609 19:17:23 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 76817 00:16:46.609 19:17:23 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:16:46.609 killing process with pid 76817 00:16:46.609 19:17:23 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:16:46.609 19:17:23 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 76817' 00:16:46.609 19:17:23 -- common/autotest_common.sh@943 -- # kill 76817 00:16:46.609 19:17:23 -- common/autotest_common.sh@948 -- # wait 76817 00:16:46.609 Received shutdown signal, test time was about 10.000000 seconds 00:16:46.609 00:16:46.609 Latency(us) 00:16:46.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.609 =================================================================================================================== 00:16:46.609 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:47.175 19:17:24 -- target/tls.sh@37 -- # return 1 00:16:47.175 19:17:24 -- common/autotest_common.sh@641 -- # es=1 00:16:47.175 19:17:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:47.175 19:17:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:47.175 19:17:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:47.175 19:17:24 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:47.175 19:17:24 -- common/autotest_common.sh@638 -- # local es=0 00:16:47.175 19:17:24 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:47.175 19:17:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:47.175 19:17:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:47.175 19:17:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:47.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.175 19:17:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:47.175 19:17:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:47.175 19:17:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:47.175 19:17:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:47.175 19:17:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:47.175 19:17:24 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:47.175 19:17:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:47.175 19:17:24 -- target/tls.sh@28 -- # bdevperf_pid=76868 00:16:47.175 19:17:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.175 19:17:24 -- target/tls.sh@31 -- # waitforlisten 76868 /var/tmp/bdevperf.sock 00:16:47.175 19:17:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:47.175 19:17:24 -- common/autotest_common.sh@817 -- # '[' -z 76868 ']' 00:16:47.175 19:17:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.175 19:17:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.175 19:17:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.175 19:17:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.175 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:16:47.175 [2024-02-14 19:17:24.457908] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
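For reference, the expected-failure case above (host2 against cnode1) and the one being launched here (host1 against cnode2) both hinge on the same lookup: the target resolves the handshake PSK from the identity string "NVMe0R01 <hostnqn> <subnqn>" quoted in the errors, and only pairs registered with nvmf_subsystem_add_host have a key on file. A minimal sketch of that contrast, reusing the rpc.py invocations, paths, and NQNs that appear in the trace (illustrative only, not part of the test script):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

# Only host1 was registered against cnode1 with key1 earlier in the run.
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

# Succeeds: identity "NVMe0R01 ...host1 ...cnode1" matches the registration.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

# Fails as in the trace: no PSK is registered for host2/cnode1 (or for host1/cnode2),
# so the target logs "Could not find PSK for identity" and the attach returns -32602.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk "$KEY"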
00:16:47.176 [2024-02-14 19:17:24.458328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76868 ] 00:16:47.434 [2024-02-14 19:17:24.597248] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.434 [2024-02-14 19:17:24.757041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.370 19:17:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.370 19:17:25 -- common/autotest_common.sh@850 -- # return 0 00:16:48.370 19:17:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:48.371 [2024-02-14 19:17:25.670746] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.371 [2024-02-14 19:17:25.679374] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:48.371 [2024-02-14 19:17:25.680116] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:48.371 [2024-02-14 19:17:25.680292] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:48.371 [2024-02-14 19:17:25.680635] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9940 (107): Transport endpoint is not connected 00:16:48.371 [2024-02-14 19:17:25.681632] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9940 (9): Bad file descriptor 00:16:48.371 [2024-02-14 19:17:25.682637] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:48.371 [2024-02-14 19:17:25.682671] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:48.371 [2024-02-14 19:17:25.682689] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:48.371 2024/02/14 19:17:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:48.371 request: 00:16:48.371 { 00:16:48.371 "method": "bdev_nvme_attach_controller", 00:16:48.371 "params": { 00:16:48.371 "name": "TLSTEST", 00:16:48.371 "trtype": "tcp", 00:16:48.371 "traddr": "10.0.0.2", 00:16:48.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:48.371 "adrfam": "ipv4", 00:16:48.371 "trsvcid": "4420", 00:16:48.371 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:48.371 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:48.371 } 00:16:48.371 } 00:16:48.371 Got JSON-RPC error response 00:16:48.371 GoRPCClient: error on JSON-RPC call 00:16:48.371 19:17:25 -- target/tls.sh@36 -- # killprocess 76868 00:16:48.371 19:17:25 -- common/autotest_common.sh@924 -- # '[' -z 76868 ']' 00:16:48.371 19:17:25 -- common/autotest_common.sh@928 -- # kill -0 76868 00:16:48.371 19:17:25 -- common/autotest_common.sh@929 -- # uname 00:16:48.371 19:17:25 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:48.371 19:17:25 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 76868 00:16:48.371 19:17:25 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:16:48.371 19:17:25 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:16:48.371 killing process with pid 76868 00:16:48.371 19:17:25 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 76868' 00:16:48.371 19:17:25 -- common/autotest_common.sh@943 -- # kill 76868 00:16:48.371 19:17:25 -- common/autotest_common.sh@948 -- # wait 76868 00:16:48.371 Received shutdown signal, test time was about 10.000000 seconds 00:16:48.371 00:16:48.371 Latency(us) 00:16:48.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.371 =================================================================================================================== 00:16:48.371 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:48.939 19:17:26 -- target/tls.sh@37 -- # return 1 00:16:48.939 19:17:26 -- common/autotest_common.sh@641 -- # es=1 00:16:48.939 19:17:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:48.939 19:17:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:48.939 19:17:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:48.939 19:17:26 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:48.939 19:17:26 -- common/autotest_common.sh@638 -- # local es=0 00:16:48.939 19:17:26 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:48.939 19:17:26 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:48.939 19:17:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:48.939 19:17:26 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:48.939 19:17:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:48.939 19:17:26 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:48.939 19:17:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:48.939 19:17:26 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:16:48.939 19:17:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:48.939 19:17:26 -- target/tls.sh@23 -- # psk= 00:16:48.939 19:17:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:48.939 19:17:26 -- target/tls.sh@28 -- # bdevperf_pid=76914 00:16:48.939 19:17:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.939 19:17:26 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:48.939 19:17:26 -- target/tls.sh@31 -- # waitforlisten 76914 /var/tmp/bdevperf.sock 00:16:48.939 19:17:26 -- common/autotest_common.sh@817 -- # '[' -z 76914 ']' 00:16:48.939 19:17:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:48.939 19:17:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:48.939 19:17:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:48.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:48.939 19:17:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:48.939 19:17:26 -- common/autotest_common.sh@10 -- # set +x 00:16:48.940 [2024-02-14 19:17:26.218139] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:48.940 [2024-02-14 19:17:26.218265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76914 ] 00:16:49.198 [2024-02-14 19:17:26.357529] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.198 [2024-02-14 19:17:26.523467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.134 19:17:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:50.134 19:17:27 -- common/autotest_common.sh@850 -- # return 0 00:16:50.134 19:17:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:50.134 [2024-02-14 19:17:27.410010] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:50.134 [2024-02-14 19:17:27.411924] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd093e0 (9): Bad file descriptor 00:16:50.134 [2024-02-14 19:17:27.412914] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:50.135 [2024-02-14 19:17:27.412946] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:50.135 [2024-02-14 19:17:27.412963] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:50.135 2024/02/14 19:17:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:50.135 request: 00:16:50.135 { 00:16:50.135 "method": "bdev_nvme_attach_controller", 00:16:50.135 "params": { 00:16:50.135 "name": "TLSTEST", 00:16:50.135 "trtype": "tcp", 00:16:50.135 "traddr": "10.0.0.2", 00:16:50.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:50.135 "adrfam": "ipv4", 00:16:50.135 "trsvcid": "4420", 00:16:50.135 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:16:50.135 } 00:16:50.135 } 00:16:50.135 Got JSON-RPC error response 00:16:50.135 GoRPCClient: error on JSON-RPC call 00:16:50.135 19:17:27 -- target/tls.sh@36 -- # killprocess 76914 00:16:50.135 19:17:27 -- common/autotest_common.sh@924 -- # '[' -z 76914 ']' 00:16:50.135 19:17:27 -- common/autotest_common.sh@928 -- # kill -0 76914 00:16:50.135 19:17:27 -- common/autotest_common.sh@929 -- # uname 00:16:50.135 19:17:27 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:50.135 19:17:27 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 76914 00:16:50.135 19:17:27 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:16:50.135 killing process with pid 76914 00:16:50.135 19:17:27 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:16:50.135 19:17:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 76914' 00:16:50.135 19:17:27 -- common/autotest_common.sh@943 -- # kill 76914 00:16:50.135 19:17:27 -- common/autotest_common.sh@948 -- # wait 76914 00:16:50.135 Received shutdown signal, test time was about 10.000000 seconds 00:16:50.135 00:16:50.135 Latency(us) 00:16:50.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.135 =================================================================================================================== 00:16:50.135 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:50.702 19:17:27 -- target/tls.sh@37 -- # return 1 00:16:50.702 19:17:27 -- common/autotest_common.sh@641 -- # es=1 00:16:50.702 19:17:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:50.702 19:17:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:50.702 19:17:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:50.702 19:17:27 -- target/tls.sh@167 -- # killprocess 76253 00:16:50.702 19:17:27 -- common/autotest_common.sh@924 -- # '[' -z 76253 ']' 00:16:50.702 19:17:27 -- common/autotest_common.sh@928 -- # kill -0 76253 00:16:50.702 19:17:27 -- common/autotest_common.sh@929 -- # uname 00:16:50.702 19:17:27 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:50.702 19:17:27 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 76253 00:16:50.702 19:17:27 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:16:50.702 killing process with pid 76253 00:16:50.702 19:17:27 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:16:50.702 19:17:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 76253' 00:16:50.702 19:17:27 -- common/autotest_common.sh@943 -- # kill 76253 00:16:50.702 19:17:27 -- common/autotest_common.sh@948 -- # wait 76253 00:16:50.971 19:17:28 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:16:50.971 19:17:28 -- 
target/tls.sh@49 -- # local key hash crc 00:16:50.971 19:17:28 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:50.971 19:17:28 -- target/tls.sh@51 -- # hash=02 00:16:50.971 19:17:28 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:16:50.971 19:17:28 -- target/tls.sh@52 -- # tail -c8 00:16:50.971 19:17:28 -- target/tls.sh@52 -- # gzip -1 -c 00:16:50.971 19:17:28 -- target/tls.sh@52 -- # head -c 4 00:16:50.971 19:17:28 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:50.971 19:17:28 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:50.971 19:17:28 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:50.971 19:17:28 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:50.971 19:17:28 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:50.971 19:17:28 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:50.971 19:17:28 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:50.971 19:17:28 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:50.971 19:17:28 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:50.971 19:17:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:50.971 19:17:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:50.971 19:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:50.971 19:17:28 -- nvmf/common.sh@469 -- # nvmfpid=76980 00:16:50.971 19:17:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:50.971 19:17:28 -- nvmf/common.sh@470 -- # waitforlisten 76980 00:16:50.971 19:17:28 -- common/autotest_common.sh@817 -- # '[' -z 76980 ']' 00:16:50.971 19:17:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.971 19:17:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:50.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.971 19:17:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.971 19:17:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:50.971 19:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:50.971 [2024-02-14 19:17:28.245977] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:50.971 [2024-02-14 19:17:28.246076] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.249 [2024-02-14 19:17:28.381430] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.249 [2024-02-14 19:17:28.504748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:51.249 [2024-02-14 19:17:28.504902] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.249 [2024-02-14 19:17:28.504915] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:51.249 [2024-02-14 19:17:28.504924] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.249 [2024-02-14 19:17:28.504966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.185 19:17:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:52.185 19:17:29 -- common/autotest_common.sh@850 -- # return 0 00:16:52.185 19:17:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:52.185 19:17:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:52.185 19:17:29 -- common/autotest_common.sh@10 -- # set +x 00:16:52.185 19:17:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.185 19:17:29 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:52.186 19:17:29 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:52.186 19:17:29 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:52.186 [2024-02-14 19:17:29.550612] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.186 19:17:29 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:52.444 19:17:29 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:52.703 [2024-02-14 19:17:30.034712] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:52.703 [2024-02-14 19:17:30.035057] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.703 19:17:30 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:52.962 malloc0 00:16:52.962 19:17:30 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:53.221 19:17:30 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:53.480 19:17:30 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:53.480 19:17:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:53.480 19:17:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:53.480 19:17:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:53.480 19:17:30 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:16:53.480 19:17:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.480 19:17:30 -- target/tls.sh@28 -- # bdevperf_pid=77077 00:16:53.480 19:17:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.480 19:17:30 -- target/tls.sh@31 -- # waitforlisten 77077 /var/tmp/bdevperf.sock 00:16:53.480 19:17:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:53.480 19:17:30 -- common/autotest_common.sh@817 -- # '[' -z 77077 ']' 00:16:53.480 19:17:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.480 
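For reference, the key file used in this second pass was produced a few lines earlier by format_interchange_psk with hash label 02 (the shorter keys earlier in the run used 01). What the trace shows amounts to: take the configured key string, append its CRC32 (read out of a gzip -1 trailer), base64 the result, and wrap it as NVMeTLSkey-1:<hash>:<base64>:. A hand-runnable sketch of those same steps, with the values copied from the trace (this illustrates the derivation; it is not the tls.sh helper itself):

key=00112233445566778899aabbccddeeff0011223344556677   # configured key, as in the trace
hash=02                                                  # 01 for the shorter keys above, 02 here

# gzip -1 ends its stream with CRC32 + input size; the last 8 bytes minus the
# final 4 therefore give the CRC32 of the key string. The CRC bytes here contain
# no NULs, so holding them in a shell variable works, just as in the trace.
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)

# Interchange form: "NVMeTLSkey-1:<hash>:" + base64(key || crc32) + ":"
psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
echo "$psk"
# prints NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
# which matches key_long in the log; the file is then written out and chmod 0600'd.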
19:17:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:53.480 19:17:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.480 19:17:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:53.480 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:16:53.745 [2024-02-14 19:17:30.941182] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:16:53.745 [2024-02-14 19:17:30.941314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77077 ] 00:16:53.745 [2024-02-14 19:17:31.087047] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.006 [2024-02-14 19:17:31.260389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.573 19:17:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:54.573 19:17:31 -- common/autotest_common.sh@850 -- # return 0 00:16:54.573 19:17:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:54.832 [2024-02-14 19:17:32.153880] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.832 TLSTESTn1 00:16:54.832 19:17:32 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:55.090 Running I/O for 10 seconds... 
00:17:05.065 00:17:05.065 Latency(us) 00:17:05.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.065 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:05.065 Verification LBA range: start 0x0 length 0x2000 00:17:05.065 TLSTESTn1 : 10.01 5787.47 22.61 0.00 0.00 22084.38 2383.13 24903.68 00:17:05.065 =================================================================================================================== 00:17:05.065 Total : 5787.47 22.61 0.00 0.00 22084.38 2383.13 24903.68 00:17:05.065 0 00:17:05.065 19:17:42 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.065 19:17:42 -- target/tls.sh@45 -- # killprocess 77077 00:17:05.065 19:17:42 -- common/autotest_common.sh@924 -- # '[' -z 77077 ']' 00:17:05.065 19:17:42 -- common/autotest_common.sh@928 -- # kill -0 77077 00:17:05.065 19:17:42 -- common/autotest_common.sh@929 -- # uname 00:17:05.065 19:17:42 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:05.065 19:17:42 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 77077 00:17:05.065 19:17:42 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:17:05.065 19:17:42 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:17:05.065 19:17:42 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 77077' 00:17:05.065 killing process with pid 77077 00:17:05.065 19:17:42 -- common/autotest_common.sh@943 -- # kill 77077 00:17:05.065 19:17:42 -- common/autotest_common.sh@948 -- # wait 77077 00:17:05.065 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.065 00:17:05.065 Latency(us) 00:17:05.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.065 =================================================================================================================== 00:17:05.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.326 19:17:42 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.326 19:17:42 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.326 19:17:42 -- common/autotest_common.sh@638 -- # local es=0 00:17:05.326 19:17:42 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.326 19:17:42 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:05.326 19:17:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:05.326 19:17:42 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:05.326 19:17:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:05.326 19:17:42 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.326 19:17:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:05.326 19:17:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:05.326 19:17:42 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:05.326 19:17:42 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:05.326 19:17:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.326 19:17:42 -- target/tls.sh@28 -- # bdevperf_pid=77230 
00:17:05.326 19:17:42 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:05.326 19:17:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.326 19:17:42 -- target/tls.sh@31 -- # waitforlisten 77230 /var/tmp/bdevperf.sock 00:17:05.326 19:17:42 -- common/autotest_common.sh@817 -- # '[' -z 77230 ']' 00:17:05.326 19:17:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.326 19:17:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:05.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.326 19:17:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.326 19:17:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:05.326 19:17:42 -- common/autotest_common.sh@10 -- # set +x 00:17:05.326 [2024-02-14 19:17:42.740964] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:05.326 [2024-02-14 19:17:42.741066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77230 ] 00:17:05.584 [2024-02-14 19:17:42.874731] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.584 [2024-02-14 19:17:42.993788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.519 19:17:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:06.519 19:17:43 -- common/autotest_common.sh@850 -- # return 0 00:17:06.520 19:17:43 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:06.778 [2024-02-14 19:17:43.952993] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.778 [2024-02-14 19:17:43.953058] bdev_nvme_rpc.c: 337:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:06.778 2024/02/14 19:17:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:06.778 request: 00:17:06.778 { 00:17:06.778 "method": "bdev_nvme_attach_controller", 00:17:06.778 "params": { 00:17:06.778 "name": "TLSTEST", 00:17:06.778 "trtype": "tcp", 00:17:06.778 "traddr": "10.0.0.2", 00:17:06.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.778 "adrfam": "ipv4", 00:17:06.778 "trsvcid": "4420", 00:17:06.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.778 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:06.778 } 00:17:06.778 } 00:17:06.778 Got JSON-RPC error response 00:17:06.778 GoRPCClient: error on JSON-RPC call 00:17:06.778 19:17:43 -- target/tls.sh@36 -- # killprocess 77230 00:17:06.778 19:17:43 -- common/autotest_common.sh@924 -- # '[' -z 77230 ']' 
00:17:06.778 19:17:43 -- common/autotest_common.sh@928 -- # kill -0 77230 00:17:06.778 19:17:43 -- common/autotest_common.sh@929 -- # uname 00:17:06.778 19:17:43 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:06.778 19:17:43 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 77230 00:17:06.778 19:17:44 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:17:06.778 killing process with pid 77230 00:17:06.778 19:17:44 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:17:06.778 19:17:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 77230' 00:17:06.778 19:17:44 -- common/autotest_common.sh@943 -- # kill 77230 00:17:06.778 Received shutdown signal, test time was about 10.000000 seconds 00:17:06.778 00:17:06.778 Latency(us) 00:17:06.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.778 =================================================================================================================== 00:17:06.778 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:06.778 19:17:44 -- common/autotest_common.sh@948 -- # wait 77230 00:17:07.037 19:17:44 -- target/tls.sh@37 -- # return 1 00:17:07.037 19:17:44 -- common/autotest_common.sh@641 -- # es=1 00:17:07.037 19:17:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:07.037 19:17:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:07.037 19:17:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:07.037 19:17:44 -- target/tls.sh@183 -- # killprocess 76980 00:17:07.037 19:17:44 -- common/autotest_common.sh@924 -- # '[' -z 76980 ']' 00:17:07.037 19:17:44 -- common/autotest_common.sh@928 -- # kill -0 76980 00:17:07.037 19:17:44 -- common/autotest_common.sh@929 -- # uname 00:17:07.037 19:17:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:07.037 19:17:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 76980 00:17:07.037 19:17:44 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:17:07.037 killing process with pid 76980 00:17:07.037 19:17:44 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:17:07.037 19:17:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 76980' 00:17:07.037 19:17:44 -- common/autotest_common.sh@943 -- # kill 76980 00:17:07.037 19:17:44 -- common/autotest_common.sh@948 -- # wait 76980 00:17:07.295 19:17:44 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:07.295 19:17:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:07.295 19:17:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:07.295 19:17:44 -- common/autotest_common.sh@10 -- # set +x 00:17:07.295 19:17:44 -- nvmf/common.sh@469 -- # nvmfpid=77286 00:17:07.295 19:17:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:07.295 19:17:44 -- nvmf/common.sh@470 -- # waitforlisten 77286 00:17:07.295 19:17:44 -- common/autotest_common.sh@817 -- # '[' -z 77286 ']' 00:17:07.295 19:17:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.295 19:17:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:07.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.295 19:17:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:07.295 19:17:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:07.295 19:17:44 -- common/autotest_common.sh@10 -- # set +x 00:17:07.295 [2024-02-14 19:17:44.662509] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:07.295 [2024-02-14 19:17:44.662630] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.554 [2024-02-14 19:17:44.801008] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.554 [2024-02-14 19:17:44.928231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:07.554 [2024-02-14 19:17:44.928393] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.554 [2024-02-14 19:17:44.928407] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.554 [2024-02-14 19:17:44.928417] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.554 [2024-02-14 19:17:44.928443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.490 19:17:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:08.490 19:17:45 -- common/autotest_common.sh@850 -- # return 0 00:17:08.490 19:17:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:08.490 19:17:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:08.490 19:17:45 -- common/autotest_common.sh@10 -- # set +x 00:17:08.490 19:17:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.490 19:17:45 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:08.490 19:17:45 -- common/autotest_common.sh@638 -- # local es=0 00:17:08.490 19:17:45 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:08.490 19:17:45 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:17:08.490 19:17:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:08.490 19:17:45 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:17:08.490 19:17:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:08.490 19:17:45 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:08.490 19:17:45 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:08.490 19:17:45 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:08.490 [2024-02-14 19:17:45.889331] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.749 19:17:45 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:09.008 19:17:46 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:09.267 [2024-02-14 19:17:46.433461] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:09.267 [2024-02-14 19:17:46.433700] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
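The setup_nvmf_tgt helper performs the whole target-side configuration, but the PSK file itself is only opened at the final nvmf_subsystem_add_host step. This pass deliberately left key_long.txt world-readable (the chmod 0666 earlier), so everything up to and including the TLS listener succeeds and the failure only shows up at that last RPC, immediately below. For reference, the correctly configured sequence — with the repo path shortened to $SPDK_DIR and the key path to $KEY, both readability assumptions — looks like:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    KEY=$SPDK_DIR/test/nvmf/target/key_long.txt
    RPC="$SPDK_DIR/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-secured (logged as experimental).
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # The PSK file is only read here; it must not be group/world accessible
    # (the harness keeps it at 0600), otherwise tcp_load_psk rejects it.
    chmod 0600 "$KEY"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"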
00:17:09.267 19:17:46 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:09.526 malloc0 00:17:09.526 19:17:46 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:09.785 19:17:46 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:10.044 [2024-02-14 19:17:47.232981] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:10.044 [2024-02-14 19:17:47.233029] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:10.044 [2024-02-14 19:17:47.233049] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:10.044 2024/02/14 19:17:47 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:10.044 request: 00:17:10.044 { 00:17:10.044 "method": "nvmf_subsystem_add_host", 00:17:10.044 "params": { 00:17:10.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.044 "host": "nqn.2016-06.io.spdk:host1", 00:17:10.044 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:10.044 } 00:17:10.044 } 00:17:10.044 Got JSON-RPC error response 00:17:10.044 GoRPCClient: error on JSON-RPC call 00:17:10.044 19:17:47 -- common/autotest_common.sh@641 -- # es=1 00:17:10.044 19:17:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:10.044 19:17:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:10.044 19:17:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:10.044 19:17:47 -- target/tls.sh@189 -- # killprocess 77286 00:17:10.044 19:17:47 -- common/autotest_common.sh@924 -- # '[' -z 77286 ']' 00:17:10.044 19:17:47 -- common/autotest_common.sh@928 -- # kill -0 77286 00:17:10.044 19:17:47 -- common/autotest_common.sh@929 -- # uname 00:17:10.044 19:17:47 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:10.044 19:17:47 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 77286 00:17:10.044 19:17:47 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:17:10.044 killing process with pid 77286 00:17:10.044 19:17:47 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:17:10.044 19:17:47 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 77286' 00:17:10.044 19:17:47 -- common/autotest_common.sh@943 -- # kill 77286 00:17:10.044 19:17:47 -- common/autotest_common.sh@948 -- # wait 77286 00:17:10.303 19:17:47 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:10.303 19:17:47 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:10.303 19:17:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:10.303 19:17:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:10.303 19:17:47 -- common/autotest_common.sh@10 -- # set +x 00:17:10.303 19:17:47 -- nvmf/common.sh@469 -- # nvmfpid=77398 00:17:10.303 19:17:47 -- nvmf/common.sh@470 -- # waitforlisten 77398 00:17:10.303 19:17:47 -- common/autotest_common.sh@817 -- # '[' -z 77398 ']' 00:17:10.303 19:17:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:10.303 19:17:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.303 19:17:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:10.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.303 19:17:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.303 19:17:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:10.303 19:17:47 -- common/autotest_common.sh@10 -- # set +x 00:17:10.303 [2024-02-14 19:17:47.616283] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:10.303 [2024-02-14 19:17:47.616412] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.562 [2024-02-14 19:17:47.756141] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.562 [2024-02-14 19:17:47.872472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:10.562 [2024-02-14 19:17:47.872654] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.562 [2024-02-14 19:17:47.872668] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.562 [2024-02-14 19:17:47.872677] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.562 [2024-02-14 19:17:47.872709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.498 19:17:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:11.498 19:17:48 -- common/autotest_common.sh@850 -- # return 0 00:17:11.498 19:17:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:11.498 19:17:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:11.498 19:17:48 -- common/autotest_common.sh@10 -- # set +x 00:17:11.498 19:17:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.498 19:17:48 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.498 19:17:48 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.498 19:17:48 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:11.498 [2024-02-14 19:17:48.836780] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.498 19:17:48 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:11.757 19:17:49 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:12.016 [2024-02-14 19:17:49.308942] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:12.016 [2024-02-14 19:17:49.309198] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.016 19:17:49 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:12.274 malloc0 00:17:12.274 19:17:49 -- target/tls.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:12.532 19:17:49 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:12.790 19:17:50 -- target/tls.sh@197 -- # bdevperf_pid=77499 00:17:12.790 19:17:50 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:12.790 19:17:50 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:12.790 19:17:50 -- target/tls.sh@200 -- # waitforlisten 77499 /var/tmp/bdevperf.sock 00:17:12.790 19:17:50 -- common/autotest_common.sh@817 -- # '[' -z 77499 ']' 00:17:12.790 19:17:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.790 19:17:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:12.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.790 19:17:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.790 19:17:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:12.790 19:17:50 -- common/autotest_common.sh@10 -- # set +x 00:17:12.790 [2024-02-14 19:17:50.131829] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:12.790 [2024-02-14 19:17:50.131939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77499 ] 00:17:13.049 [2024-02-14 19:17:50.273603] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.049 [2024-02-14 19:17:50.396369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.982 19:17:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:13.982 19:17:51 -- common/autotest_common.sh@850 -- # return 0 00:17:13.982 19:17:51 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.982 [2024-02-14 19:17:51.273574] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:13.982 TLSTESTn1 00:17:13.982 19:17:51 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:14.240 19:17:51 -- target/tls.sh@205 -- # tgtconf='{ 00:17:14.240 "subsystems": [ 00:17:14.240 { 00:17:14.240 "subsystem": "iobuf", 00:17:14.240 "config": [ 00:17:14.240 { 00:17:14.240 "method": "iobuf_set_options", 00:17:14.240 "params": { 00:17:14.240 "large_bufsize": 135168, 00:17:14.240 "large_pool_count": 1024, 00:17:14.240 "small_bufsize": 8192, 00:17:14.240 "small_pool_count": 8192 00:17:14.240 } 00:17:14.240 } 00:17:14.240 ] 00:17:14.240 }, 00:17:14.240 { 00:17:14.240 "subsystem": "sock", 00:17:14.240 "config": [ 00:17:14.240 { 00:17:14.240 "method": "sock_impl_set_options", 00:17:14.240 "params": { 00:17:14.240 "enable_ktls": false, 00:17:14.240 "enable_placement_id": 0, 00:17:14.240 "enable_quickack": false, 00:17:14.240 "enable_recv_pipe": true, 00:17:14.240 
"enable_zerocopy_send_client": false, 00:17:14.240 "enable_zerocopy_send_server": true, 00:17:14.240 "impl_name": "posix", 00:17:14.240 "recv_buf_size": 2097152, 00:17:14.240 "send_buf_size": 2097152, 00:17:14.240 "tls_version": 0, 00:17:14.240 "zerocopy_threshold": 0 00:17:14.240 } 00:17:14.240 }, 00:17:14.240 { 00:17:14.240 "method": "sock_impl_set_options", 00:17:14.240 "params": { 00:17:14.240 "enable_ktls": false, 00:17:14.240 "enable_placement_id": 0, 00:17:14.240 "enable_quickack": false, 00:17:14.240 "enable_recv_pipe": true, 00:17:14.240 "enable_zerocopy_send_client": false, 00:17:14.240 "enable_zerocopy_send_server": true, 00:17:14.240 "impl_name": "ssl", 00:17:14.240 "recv_buf_size": 4096, 00:17:14.240 "send_buf_size": 4096, 00:17:14.240 "tls_version": 0, 00:17:14.240 "zerocopy_threshold": 0 00:17:14.240 } 00:17:14.240 } 00:17:14.240 ] 00:17:14.240 }, 00:17:14.240 { 00:17:14.240 "subsystem": "vmd", 00:17:14.240 "config": [] 00:17:14.240 }, 00:17:14.240 { 00:17:14.240 "subsystem": "accel", 00:17:14.240 "config": [ 00:17:14.240 { 00:17:14.240 "method": "accel_set_options", 00:17:14.240 "params": { 00:17:14.240 "buf_count": 2048, 00:17:14.240 "large_cache_size": 16, 00:17:14.240 "sequence_count": 2048, 00:17:14.240 "small_cache_size": 128, 00:17:14.240 "task_count": 2048 00:17:14.240 } 00:17:14.240 } 00:17:14.240 ] 00:17:14.240 }, 00:17:14.240 { 00:17:14.240 "subsystem": "bdev", 00:17:14.240 "config": [ 00:17:14.240 { 00:17:14.240 "method": "bdev_set_options", 00:17:14.240 "params": { 00:17:14.240 "bdev_auto_examine": true, 00:17:14.240 "bdev_io_cache_size": 256, 00:17:14.240 "bdev_io_pool_size": 65535, 00:17:14.240 "iobuf_large_cache_size": 16, 00:17:14.240 "iobuf_small_cache_size": 128 00:17:14.240 } 00:17:14.240 }, 00:17:14.240 { 00:17:14.240 "method": "bdev_raid_set_options", 00:17:14.240 "params": { 00:17:14.240 "process_window_size_kb": 1024 00:17:14.240 } 00:17:14.240 }, 00:17:14.240 { 00:17:14.240 "method": "bdev_iscsi_set_options", 00:17:14.240 "params": { 00:17:14.240 "timeout_sec": 30 00:17:14.240 } 00:17:14.240 }, 00:17:14.240 { 00:17:14.240 "method": "bdev_nvme_set_options", 00:17:14.240 "params": { 00:17:14.240 "action_on_timeout": "none", 00:17:14.240 "allow_accel_sequence": false, 00:17:14.240 "arbitration_burst": 0, 00:17:14.240 "bdev_retry_count": 3, 00:17:14.240 "ctrlr_loss_timeout_sec": 0, 00:17:14.240 "delay_cmd_submit": true, 00:17:14.240 "disable_auto_failback": false, 00:17:14.240 "fast_io_fail_timeout_sec": 0, 00:17:14.240 "generate_uuids": false, 00:17:14.240 "high_priority_weight": 0, 00:17:14.240 "io_path_stat": false, 00:17:14.240 "io_queue_requests": 0, 00:17:14.240 "keep_alive_timeout_ms": 10000, 00:17:14.240 "low_priority_weight": 0, 00:17:14.240 "medium_priority_weight": 0, 00:17:14.240 "nvme_adminq_poll_period_us": 10000, 00:17:14.240 "nvme_error_stat": false, 00:17:14.240 "nvme_ioq_poll_period_us": 0, 00:17:14.240 "rdma_cm_event_timeout_ms": 0, 00:17:14.241 "rdma_max_cq_size": 0, 00:17:14.241 "rdma_srq_size": 0, 00:17:14.241 "reconnect_delay_sec": 0, 00:17:14.241 "timeout_admin_us": 0, 00:17:14.241 "timeout_us": 0, 00:17:14.241 "transport_ack_timeout": 0, 00:17:14.241 "transport_retry_count": 4, 00:17:14.241 "transport_tos": 0 00:17:14.241 } 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "method": "bdev_nvme_set_hotplug", 00:17:14.241 "params": { 00:17:14.241 "enable": false, 00:17:14.241 "period_us": 100000 00:17:14.241 } 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "method": "bdev_malloc_create", 00:17:14.241 "params": { 00:17:14.241 "block_size": 
4096, 00:17:14.241 "name": "malloc0", 00:17:14.241 "num_blocks": 8192, 00:17:14.241 "optimal_io_boundary": 0, 00:17:14.241 "physical_block_size": 4096, 00:17:14.241 "uuid": "bd167dae-13d5-4d99-9a8d-b4fe8a9eda7c" 00:17:14.241 } 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "method": "bdev_wait_for_examine" 00:17:14.241 } 00:17:14.241 ] 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "subsystem": "nbd", 00:17:14.241 "config": [] 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "subsystem": "scheduler", 00:17:14.241 "config": [ 00:17:14.241 { 00:17:14.241 "method": "framework_set_scheduler", 00:17:14.241 "params": { 00:17:14.241 "name": "static" 00:17:14.241 } 00:17:14.241 } 00:17:14.241 ] 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "subsystem": "nvmf", 00:17:14.241 "config": [ 00:17:14.241 { 00:17:14.241 "method": "nvmf_set_config", 00:17:14.241 "params": { 00:17:14.241 "admin_cmd_passthru": { 00:17:14.241 "identify_ctrlr": false 00:17:14.241 }, 00:17:14.241 "discovery_filter": "match_any" 00:17:14.241 } 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "method": "nvmf_set_max_subsystems", 00:17:14.241 "params": { 00:17:14.241 "max_subsystems": 1024 00:17:14.241 } 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "method": "nvmf_set_crdt", 00:17:14.241 "params": { 00:17:14.241 "crdt1": 0, 00:17:14.241 "crdt2": 0, 00:17:14.241 "crdt3": 0 00:17:14.241 } 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "method": "nvmf_create_transport", 00:17:14.241 "params": { 00:17:14.241 "abort_timeout_sec": 1, 00:17:14.241 "buf_cache_size": 4294967295, 00:17:14.241 "c2h_success": false, 00:17:14.241 "dif_insert_or_strip": false, 00:17:14.241 "in_capsule_data_size": 4096, 00:17:14.241 "io_unit_size": 131072, 00:17:14.241 "max_aq_depth": 128, 00:17:14.241 "max_io_qpairs_per_ctrlr": 127, 00:17:14.241 "max_io_size": 131072, 00:17:14.241 "max_queue_depth": 128, 00:17:14.241 "num_shared_buffers": 511, 00:17:14.241 "sock_priority": 0, 00:17:14.241 "trtype": "TCP", 00:17:14.241 "zcopy": false 00:17:14.241 } 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "method": "nvmf_create_subsystem", 00:17:14.241 "params": { 00:17:14.241 "allow_any_host": false, 00:17:14.241 "ana_reporting": false, 00:17:14.241 "max_cntlid": 65519, 00:17:14.241 "max_namespaces": 10, 00:17:14.241 "min_cntlid": 1, 00:17:14.241 "model_number": "SPDK bdev Controller", 00:17:14.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.241 "serial_number": "SPDK00000000000001" 00:17:14.241 } 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "method": "nvmf_subsystem_add_host", 00:17:14.241 "params": { 00:17:14.241 "host": "nqn.2016-06.io.spdk:host1", 00:17:14.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.241 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:14.241 } 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "method": "nvmf_subsystem_add_ns", 00:17:14.241 "params": { 00:17:14.241 "namespace": { 00:17:14.241 "bdev_name": "malloc0", 00:17:14.241 "nguid": "BD167DAE13D54D999A8DB4FE8A9EDA7C", 00:17:14.241 "nsid": 1, 00:17:14.241 "uuid": "bd167dae-13d5-4d99-9a8d-b4fe8a9eda7c" 00:17:14.241 }, 00:17:14.241 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:14.241 } 00:17:14.241 }, 00:17:14.241 { 00:17:14.241 "method": "nvmf_subsystem_add_listener", 00:17:14.241 "params": { 00:17:14.241 "listen_address": { 00:17:14.241 "adrfam": "IPv4", 00:17:14.241 "traddr": "10.0.0.2", 00:17:14.241 "trsvcid": "4420", 00:17:14.241 "trtype": "TCP" 00:17:14.241 }, 00:17:14.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.241 "secure_channel": true 00:17:14.241 } 00:17:14.241 } 00:17:14.241 ] 
00:17:14.241 } 00:17:14.241 ] 00:17:14.241 }' 00:17:14.241 19:17:51 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:14.807 19:17:51 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:14.807 "subsystems": [ 00:17:14.807 { 00:17:14.807 "subsystem": "iobuf", 00:17:14.807 "config": [ 00:17:14.807 { 00:17:14.807 "method": "iobuf_set_options", 00:17:14.807 "params": { 00:17:14.807 "large_bufsize": 135168, 00:17:14.807 "large_pool_count": 1024, 00:17:14.807 "small_bufsize": 8192, 00:17:14.807 "small_pool_count": 8192 00:17:14.807 } 00:17:14.807 } 00:17:14.807 ] 00:17:14.807 }, 00:17:14.807 { 00:17:14.807 "subsystem": "sock", 00:17:14.807 "config": [ 00:17:14.807 { 00:17:14.807 "method": "sock_impl_set_options", 00:17:14.807 "params": { 00:17:14.807 "enable_ktls": false, 00:17:14.807 "enable_placement_id": 0, 00:17:14.807 "enable_quickack": false, 00:17:14.807 "enable_recv_pipe": true, 00:17:14.807 "enable_zerocopy_send_client": false, 00:17:14.807 "enable_zerocopy_send_server": true, 00:17:14.807 "impl_name": "posix", 00:17:14.807 "recv_buf_size": 2097152, 00:17:14.807 "send_buf_size": 2097152, 00:17:14.807 "tls_version": 0, 00:17:14.807 "zerocopy_threshold": 0 00:17:14.807 } 00:17:14.807 }, 00:17:14.807 { 00:17:14.807 "method": "sock_impl_set_options", 00:17:14.807 "params": { 00:17:14.807 "enable_ktls": false, 00:17:14.807 "enable_placement_id": 0, 00:17:14.807 "enable_quickack": false, 00:17:14.807 "enable_recv_pipe": true, 00:17:14.807 "enable_zerocopy_send_client": false, 00:17:14.807 "enable_zerocopy_send_server": true, 00:17:14.807 "impl_name": "ssl", 00:17:14.807 "recv_buf_size": 4096, 00:17:14.807 "send_buf_size": 4096, 00:17:14.807 "tls_version": 0, 00:17:14.807 "zerocopy_threshold": 0 00:17:14.807 } 00:17:14.807 } 00:17:14.807 ] 00:17:14.807 }, 00:17:14.807 { 00:17:14.807 "subsystem": "vmd", 00:17:14.807 "config": [] 00:17:14.807 }, 00:17:14.807 { 00:17:14.807 "subsystem": "accel", 00:17:14.807 "config": [ 00:17:14.807 { 00:17:14.807 "method": "accel_set_options", 00:17:14.807 "params": { 00:17:14.807 "buf_count": 2048, 00:17:14.807 "large_cache_size": 16, 00:17:14.807 "sequence_count": 2048, 00:17:14.807 "small_cache_size": 128, 00:17:14.807 "task_count": 2048 00:17:14.807 } 00:17:14.807 } 00:17:14.807 ] 00:17:14.807 }, 00:17:14.807 { 00:17:14.807 "subsystem": "bdev", 00:17:14.807 "config": [ 00:17:14.807 { 00:17:14.807 "method": "bdev_set_options", 00:17:14.808 "params": { 00:17:14.808 "bdev_auto_examine": true, 00:17:14.808 "bdev_io_cache_size": 256, 00:17:14.808 "bdev_io_pool_size": 65535, 00:17:14.808 "iobuf_large_cache_size": 16, 00:17:14.808 "iobuf_small_cache_size": 128 00:17:14.808 } 00:17:14.808 }, 00:17:14.808 { 00:17:14.808 "method": "bdev_raid_set_options", 00:17:14.808 "params": { 00:17:14.808 "process_window_size_kb": 1024 00:17:14.808 } 00:17:14.808 }, 00:17:14.808 { 00:17:14.808 "method": "bdev_iscsi_set_options", 00:17:14.808 "params": { 00:17:14.808 "timeout_sec": 30 00:17:14.808 } 00:17:14.808 }, 00:17:14.808 { 00:17:14.808 "method": "bdev_nvme_set_options", 00:17:14.808 "params": { 00:17:14.808 "action_on_timeout": "none", 00:17:14.808 "allow_accel_sequence": false, 00:17:14.808 "arbitration_burst": 0, 00:17:14.808 "bdev_retry_count": 3, 00:17:14.808 "ctrlr_loss_timeout_sec": 0, 00:17:14.808 "delay_cmd_submit": true, 00:17:14.808 "disable_auto_failback": false, 00:17:14.808 "fast_io_fail_timeout_sec": 0, 00:17:14.808 "generate_uuids": false, 00:17:14.808 "high_priority_weight": 0, 00:17:14.808 
"io_path_stat": false, 00:17:14.808 "io_queue_requests": 512, 00:17:14.808 "keep_alive_timeout_ms": 10000, 00:17:14.808 "low_priority_weight": 0, 00:17:14.808 "medium_priority_weight": 0, 00:17:14.808 "nvme_adminq_poll_period_us": 10000, 00:17:14.808 "nvme_error_stat": false, 00:17:14.808 "nvme_ioq_poll_period_us": 0, 00:17:14.808 "rdma_cm_event_timeout_ms": 0, 00:17:14.808 "rdma_max_cq_size": 0, 00:17:14.808 "rdma_srq_size": 0, 00:17:14.808 "reconnect_delay_sec": 0, 00:17:14.808 "timeout_admin_us": 0, 00:17:14.808 "timeout_us": 0, 00:17:14.808 "transport_ack_timeout": 0, 00:17:14.808 "transport_retry_count": 4, 00:17:14.808 "transport_tos": 0 00:17:14.808 } 00:17:14.808 }, 00:17:14.808 { 00:17:14.808 "method": "bdev_nvme_attach_controller", 00:17:14.808 "params": { 00:17:14.808 "adrfam": "IPv4", 00:17:14.808 "ctrlr_loss_timeout_sec": 0, 00:17:14.808 "ddgst": false, 00:17:14.808 "fast_io_fail_timeout_sec": 0, 00:17:14.808 "hdgst": false, 00:17:14.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.808 "name": "TLSTEST", 00:17:14.808 "prchk_guard": false, 00:17:14.808 "prchk_reftag": false, 00:17:14.808 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:14.808 "reconnect_delay_sec": 0, 00:17:14.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.808 "traddr": "10.0.0.2", 00:17:14.808 "trsvcid": "4420", 00:17:14.808 "trtype": "TCP" 00:17:14.808 } 00:17:14.808 }, 00:17:14.808 { 00:17:14.808 "method": "bdev_nvme_set_hotplug", 00:17:14.808 "params": { 00:17:14.808 "enable": false, 00:17:14.808 "period_us": 100000 00:17:14.808 } 00:17:14.808 }, 00:17:14.808 { 00:17:14.808 "method": "bdev_wait_for_examine" 00:17:14.808 } 00:17:14.808 ] 00:17:14.808 }, 00:17:14.808 { 00:17:14.808 "subsystem": "nbd", 00:17:14.808 "config": [] 00:17:14.808 } 00:17:14.808 ] 00:17:14.808 }' 00:17:14.808 19:17:51 -- target/tls.sh@208 -- # killprocess 77499 00:17:14.808 19:17:51 -- common/autotest_common.sh@924 -- # '[' -z 77499 ']' 00:17:14.808 19:17:51 -- common/autotest_common.sh@928 -- # kill -0 77499 00:17:14.808 19:17:51 -- common/autotest_common.sh@929 -- # uname 00:17:14.808 19:17:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:14.808 19:17:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 77499 00:17:14.808 19:17:51 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:17:14.808 19:17:51 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:17:14.808 killing process with pid 77499 00:17:14.808 19:17:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 77499' 00:17:14.808 19:17:51 -- common/autotest_common.sh@943 -- # kill 77499 00:17:14.808 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.808 00:17:14.808 Latency(us) 00:17:14.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.808 =================================================================================================================== 00:17:14.808 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.808 19:17:51 -- common/autotest_common.sh@948 -- # wait 77499 00:17:14.808 19:17:52 -- target/tls.sh@209 -- # killprocess 77398 00:17:14.808 19:17:52 -- common/autotest_common.sh@924 -- # '[' -z 77398 ']' 00:17:14.808 19:17:52 -- common/autotest_common.sh@928 -- # kill -0 77398 00:17:14.808 19:17:52 -- common/autotest_common.sh@929 -- # uname 00:17:14.808 19:17:52 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:14.808 19:17:52 -- common/autotest_common.sh@930 -- # 
ps --no-headers -o comm= 77398 00:17:15.067 19:17:52 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:17:15.067 19:17:52 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:17:15.067 killing process with pid 77398 00:17:15.067 19:17:52 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 77398' 00:17:15.067 19:17:52 -- common/autotest_common.sh@943 -- # kill 77398 00:17:15.067 19:17:52 -- common/autotest_common.sh@948 -- # wait 77398 00:17:15.326 19:17:52 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:15.326 19:17:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:15.326 19:17:52 -- target/tls.sh@212 -- # echo '{ 00:17:15.326 "subsystems": [ 00:17:15.326 { 00:17:15.326 "subsystem": "iobuf", 00:17:15.326 "config": [ 00:17:15.326 { 00:17:15.326 "method": "iobuf_set_options", 00:17:15.326 "params": { 00:17:15.326 "large_bufsize": 135168, 00:17:15.326 "large_pool_count": 1024, 00:17:15.326 "small_bufsize": 8192, 00:17:15.326 "small_pool_count": 8192 00:17:15.326 } 00:17:15.326 } 00:17:15.326 ] 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "subsystem": "sock", 00:17:15.326 "config": [ 00:17:15.326 { 00:17:15.326 "method": "sock_impl_set_options", 00:17:15.326 "params": { 00:17:15.326 "enable_ktls": false, 00:17:15.326 "enable_placement_id": 0, 00:17:15.326 "enable_quickack": false, 00:17:15.326 "enable_recv_pipe": true, 00:17:15.326 "enable_zerocopy_send_client": false, 00:17:15.326 "enable_zerocopy_send_server": true, 00:17:15.326 "impl_name": "posix", 00:17:15.326 "recv_buf_size": 2097152, 00:17:15.326 "send_buf_size": 2097152, 00:17:15.326 "tls_version": 0, 00:17:15.326 "zerocopy_threshold": 0 00:17:15.326 } 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "method": "sock_impl_set_options", 00:17:15.326 "params": { 00:17:15.326 "enable_ktls": false, 00:17:15.326 "enable_placement_id": 0, 00:17:15.326 "enable_quickack": false, 00:17:15.326 "enable_recv_pipe": true, 00:17:15.326 "enable_zerocopy_send_client": false, 00:17:15.326 "enable_zerocopy_send_server": true, 00:17:15.326 "impl_name": "ssl", 00:17:15.326 "recv_buf_size": 4096, 00:17:15.326 "send_buf_size": 4096, 00:17:15.326 "tls_version": 0, 00:17:15.326 "zerocopy_threshold": 0 00:17:15.326 } 00:17:15.326 } 00:17:15.326 ] 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "subsystem": "vmd", 00:17:15.326 "config": [] 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "subsystem": "accel", 00:17:15.326 "config": [ 00:17:15.326 { 00:17:15.326 "method": "accel_set_options", 00:17:15.326 "params": { 00:17:15.326 "buf_count": 2048, 00:17:15.326 "large_cache_size": 16, 00:17:15.326 "sequence_count": 2048, 00:17:15.326 "small_cache_size": 128, 00:17:15.326 "task_count": 2048 00:17:15.326 } 00:17:15.326 } 00:17:15.326 ] 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "subsystem": "bdev", 00:17:15.326 "config": [ 00:17:15.326 { 00:17:15.326 "method": "bdev_set_options", 00:17:15.326 "params": { 00:17:15.326 "bdev_auto_examine": true, 00:17:15.326 "bdev_io_cache_size": 256, 00:17:15.326 "bdev_io_pool_size": 65535, 00:17:15.326 "iobuf_large_cache_size": 16, 00:17:15.326 "iobuf_small_cache_size": 128 00:17:15.326 } 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "method": "bdev_raid_set_options", 00:17:15.326 "params": { 00:17:15.326 "process_window_size_kb": 1024 00:17:15.326 } 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "method": "bdev_iscsi_set_options", 00:17:15.326 "params": { 00:17:15.326 "timeout_sec": 30 00:17:15.326 } 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "method": 
"bdev_nvme_set_options", 00:17:15.326 "params": { 00:17:15.326 "action_on_timeout": "none", 00:17:15.326 "allow_accel_sequence": false, 00:17:15.326 "arbitration_burst": 0, 00:17:15.326 "bdev_retry_count": 3, 00:17:15.326 "ctrlr_loss_timeout_sec": 0, 00:17:15.326 "delay_cmd_submit": true, 00:17:15.326 "disable_auto_failback": false, 00:17:15.326 "fast_io_fail_timeout_sec": 0, 00:17:15.326 "generate_uuids": false, 00:17:15.326 "high_priority_weight": 0, 00:17:15.326 "io_path_stat": false, 00:17:15.326 "io_queue_requests": 0, 00:17:15.326 "keep_alive_timeout_ms": 10000, 00:17:15.326 "low_priority_weight": 0, 00:17:15.326 "medium_priority_weight": 0, 00:17:15.326 "nvme_adminq_poll_period_us": 10000, 00:17:15.326 "nvme_error_stat": false, 00:17:15.326 "nvme_ioq_poll_period_us": 0, 00:17:15.326 "rdma_cm_event_timeout_ms": 0, 00:17:15.326 "rdma_max_cq_size": 0, 00:17:15.326 "rdma_srq_size": 0, 00:17:15.326 "reconnect_delay_sec": 0, 00:17:15.326 "timeout_admin_us": 0, 00:17:15.326 "timeout_us": 0, 00:17:15.326 "transport_ack_timeout": 0, 00:17:15.326 "transport_retry_count": 4, 00:17:15.326 "transport_tos": 0 00:17:15.326 } 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "method": "bdev_nvme_set_hotplug", 00:17:15.326 "params": { 00:17:15.326 "enable": false, 00:17:15.326 "period_us": 100000 00:17:15.326 } 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "method": "bdev_malloc_create", 00:17:15.326 "params": { 00:17:15.326 "block_size": 4096, 00:17:15.326 "name": "malloc0", 00:17:15.326 "num_blocks": 8192, 00:17:15.326 "optimal_io_boundary": 0, 00:17:15.326 "physical_block_size": 4096, 00:17:15.326 "uuid": "bd167dae-13d5-4d99-9a8d-b4fe8a9eda7 19:17:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:15.326 c" 00:17:15.326 } 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "method": "bdev_wait_for_examine" 00:17:15.326 } 00:17:15.326 ] 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "subsystem": "nbd", 00:17:15.326 "config": [] 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "subsystem": "scheduler", 00:17:15.326 "config": [ 00:17:15.326 { 00:17:15.326 "method": "framework_set_scheduler", 00:17:15.326 "params": { 00:17:15.326 "name": "static" 00:17:15.326 } 00:17:15.326 } 00:17:15.326 ] 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "subsystem": "nvmf", 00:17:15.326 "config": [ 00:17:15.326 { 00:17:15.326 "method": "nvmf_set_config", 00:17:15.326 "params": { 00:17:15.326 "admin_cmd_passthru": { 00:17:15.326 "identify_ctrlr": false 00:17:15.326 }, 00:17:15.326 "discovery_filter": "match_any" 00:17:15.326 } 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "method": "nvmf_set_max_subsystems", 00:17:15.326 "params": { 00:17:15.326 "max_subsystems": 1024 00:17:15.326 } 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "method": "nvmf_set_crdt", 00:17:15.326 "params": { 00:17:15.326 "crdt1": 0, 00:17:15.326 "crdt2": 0, 00:17:15.326 "crdt3": 0 00:17:15.326 } 00:17:15.326 }, 00:17:15.326 { 00:17:15.326 "method": "nvmf_create_transport", 00:17:15.326 "params": { 00:17:15.326 "abort_timeout_sec": 1, 00:17:15.327 "buf_cache_size": 4294967295, 00:17:15.327 "c2h_success": false, 00:17:15.327 "dif_insert_or_strip": false, 00:17:15.327 "in_capsule_data_size": 4096, 00:17:15.327 "io_unit_size": 131072, 00:17:15.327 "max_aq_depth": 128, 00:17:15.327 "max_io_qpairs_per_ctrlr": 127, 00:17:15.327 "max_io_size": 131072, 00:17:15.327 "max_queue_depth": 128, 00:17:15.327 "num_shared_buffers": 511, 00:17:15.327 "sock_priority": 0, 00:17:15.327 "trtype": "TCP", 00:17:15.327 "zcopy": false 00:17:15.327 } 00:17:15.327 }, 00:17:15.327 { 00:17:15.327 
"method": "nvmf_create_subsystem", 00:17:15.327 "params": { 00:17:15.327 "allow_any_host": false, 00:17:15.327 "ana_reporting": false, 00:17:15.327 "max_cntlid": 65519, 00:17:15.327 "max_namespaces": 10, 00:17:15.327 "min_cntlid": 1, 00:17:15.327 "model_number": "SPDK bdev Controller", 00:17:15.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.327 "serial_number": "SPDK00000000000001" 00:17:15.327 } 00:17:15.327 }, 00:17:15.327 { 00:17:15.327 "method": "nvmf_subsystem_add_host", 00:17:15.327 "params": { 00:17:15.327 "host": "nqn.2016-06.io.spdk:host1", 00:17:15.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.327 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:15.327 } 00:17:15.327 }, 00:17:15.327 { 00:17:15.327 "method": "nvmf_subsystem_add_ns", 00:17:15.327 "params": { 00:17:15.327 "namespace": { 00:17:15.327 "bdev_name": "malloc0", 00:17:15.327 "nguid": "BD167DAE13D54D999A8DB4FE8A9EDA7C", 00:17:15.327 "nsid": 1, 00:17:15.327 "uuid": "bd167dae-13d5-4d99-9a8d-b4fe8a9eda7c" 00:17:15.327 }, 00:17:15.327 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:15.327 } 00:17:15.327 }, 00:17:15.327 { 00:17:15.327 "method": "nvmf_subsystem_add_listener", 00:17:15.327 "params": { 00:17:15.327 "listen_address": { 00:17:15.327 "adrfam": "IPv4", 00:17:15.327 "traddr": "10.0.0.2", 00:17:15.327 "trsvcid": "4420", 00:17:15.327 "trtype": "TCP" 00:17:15.327 }, 00:17:15.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.327 "secure_channel": true 00:17:15.327 } 00:17:15.327 } 00:17:15.327 ] 00:17:15.327 } 00:17:15.327 ] 00:17:15.327 }' 00:17:15.327 19:17:52 -- common/autotest_common.sh@10 -- # set +x 00:17:15.327 19:17:52 -- nvmf/common.sh@469 -- # nvmfpid=77572 00:17:15.327 19:17:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:15.327 19:17:52 -- nvmf/common.sh@470 -- # waitforlisten 77572 00:17:15.327 19:17:52 -- common/autotest_common.sh@817 -- # '[' -z 77572 ']' 00:17:15.327 19:17:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.327 19:17:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.327 19:17:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.327 19:17:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:15.327 19:17:52 -- common/autotest_common.sh@10 -- # set +x 00:17:15.327 [2024-02-14 19:17:52.571724] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:15.327 [2024-02-14 19:17:52.571846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.327 [2024-02-14 19:17:52.710450] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.585 [2024-02-14 19:17:52.821293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:15.585 [2024-02-14 19:17:52.821470] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.585 [2024-02-14 19:17:52.821483] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:15.585 [2024-02-14 19:17:52.821491] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.585 [2024-02-14 19:17:52.821539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.585 [2024-02-14 19:17:52.821571] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:17:15.844 [2024-02-14 19:17:53.047774] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.844 [2024-02-14 19:17:53.079741] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:15.844 [2024-02-14 19:17:53.079956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.101 19:17:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.101 19:17:53 -- common/autotest_common.sh@850 -- # return 0 00:17:16.101 19:17:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:16.101 19:17:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:16.101 19:17:53 -- common/autotest_common.sh@10 -- # set +x 00:17:16.359 19:17:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.359 19:17:53 -- target/tls.sh@216 -- # bdevperf_pid=77616 00:17:16.359 19:17:53 -- target/tls.sh@217 -- # waitforlisten 77616 /var/tmp/bdevperf.sock 00:17:16.359 19:17:53 -- common/autotest_common.sh@817 -- # '[' -z 77616 ']' 00:17:16.359 19:17:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.359 19:17:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:16.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.359 19:17:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:16.359 19:17:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:16.359 19:17:53 -- common/autotest_common.sh@10 -- # set +x 00:17:16.359 19:17:53 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:16.359 19:17:53 -- target/tls.sh@213 -- # echo '{ 00:17:16.359 "subsystems": [ 00:17:16.359 { 00:17:16.359 "subsystem": "iobuf", 00:17:16.359 "config": [ 00:17:16.359 { 00:17:16.359 "method": "iobuf_set_options", 00:17:16.359 "params": { 00:17:16.359 "large_bufsize": 135168, 00:17:16.359 "large_pool_count": 1024, 00:17:16.359 "small_bufsize": 8192, 00:17:16.359 "small_pool_count": 8192 00:17:16.359 } 00:17:16.359 } 00:17:16.359 ] 00:17:16.359 }, 00:17:16.359 { 00:17:16.359 "subsystem": "sock", 00:17:16.359 "config": [ 00:17:16.359 { 00:17:16.359 "method": "sock_impl_set_options", 00:17:16.359 "params": { 00:17:16.359 "enable_ktls": false, 00:17:16.359 "enable_placement_id": 0, 00:17:16.359 "enable_quickack": false, 00:17:16.359 "enable_recv_pipe": true, 00:17:16.359 "enable_zerocopy_send_client": false, 00:17:16.359 "enable_zerocopy_send_server": true, 00:17:16.359 "impl_name": "posix", 00:17:16.359 "recv_buf_size": 2097152, 00:17:16.359 "send_buf_size": 2097152, 00:17:16.359 "tls_version": 0, 00:17:16.359 "zerocopy_threshold": 0 00:17:16.359 } 00:17:16.359 }, 00:17:16.359 { 00:17:16.359 "method": "sock_impl_set_options", 00:17:16.359 "params": { 00:17:16.359 "enable_ktls": false, 00:17:16.359 "enable_placement_id": 0, 00:17:16.359 "enable_quickack": false, 00:17:16.359 "enable_recv_pipe": true, 00:17:16.360 "enable_zerocopy_send_client": false, 00:17:16.360 "enable_zerocopy_send_server": true, 00:17:16.360 "impl_name": "ssl", 00:17:16.360 "recv_buf_size": 4096, 00:17:16.360 "send_buf_size": 4096, 00:17:16.360 "tls_version": 0, 00:17:16.360 "zerocopy_threshold": 0 00:17:16.360 } 00:17:16.360 } 00:17:16.360 ] 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "subsystem": "vmd", 00:17:16.360 "config": [] 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "subsystem": "accel", 00:17:16.360 "config": [ 00:17:16.360 { 00:17:16.360 "method": "accel_set_options", 00:17:16.360 "params": { 00:17:16.360 "buf_count": 2048, 00:17:16.360 "large_cache_size": 16, 00:17:16.360 "sequence_count": 2048, 00:17:16.360 "small_cache_size": 128, 00:17:16.360 "task_count": 2048 00:17:16.360 } 00:17:16.360 } 00:17:16.360 ] 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "subsystem": "bdev", 00:17:16.360 "config": [ 00:17:16.360 { 00:17:16.360 "method": "bdev_set_options", 00:17:16.360 "params": { 00:17:16.360 "bdev_auto_examine": true, 00:17:16.360 "bdev_io_cache_size": 256, 00:17:16.360 "bdev_io_pool_size": 65535, 00:17:16.360 "iobuf_large_cache_size": 16, 00:17:16.360 "iobuf_small_cache_size": 128 00:17:16.360 } 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "method": "bdev_raid_set_options", 00:17:16.360 "params": { 00:17:16.360 "process_window_size_kb": 1024 00:17:16.360 } 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "method": "bdev_iscsi_set_options", 00:17:16.360 "params": { 00:17:16.360 "timeout_sec": 30 00:17:16.360 } 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "method": "bdev_nvme_set_options", 00:17:16.360 "params": { 00:17:16.360 "action_on_timeout": "none", 00:17:16.360 "allow_accel_sequence": false, 00:17:16.360 "arbitration_burst": 0, 00:17:16.360 "bdev_retry_count": 3, 00:17:16.360 "ctrlr_loss_timeout_sec": 0, 00:17:16.360 "delay_cmd_submit": true, 00:17:16.360 "disable_auto_failback": false, 
00:17:16.360 "fast_io_fail_timeout_sec": 0, 00:17:16.360 "generate_uuids": false, 00:17:16.360 "high_priority_weight": 0, 00:17:16.360 "io_path_stat": false, 00:17:16.360 "io_queue_requests": 512, 00:17:16.360 "keep_alive_timeout_ms": 10000, 00:17:16.360 "low_priority_weight": 0, 00:17:16.360 "medium_priority_weight": 0, 00:17:16.360 "nvme_adminq_poll_period_us": 10000, 00:17:16.360 "nvme_error_stat": false, 00:17:16.360 "nvme_ioq_poll_period_us": 0, 00:17:16.360 "rdma_cm_event_timeout_ms": 0, 00:17:16.360 "rdma_max_cq_size": 0, 00:17:16.360 "rdma_srq_size": 0, 00:17:16.360 "reconnect_delay_sec": 0, 00:17:16.360 "timeout_admin_us": 0, 00:17:16.360 "timeout_us": 0, 00:17:16.360 "transport_ack_timeout": 0, 00:17:16.360 "transport_retry_count": 4, 00:17:16.360 "transport_tos": 0 00:17:16.360 } 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "method": "bdev_nvme_attach_controller", 00:17:16.360 "params": { 00:17:16.360 "adrfam": "IPv4", 00:17:16.360 "ctrlr_loss_timeout_sec": 0, 00:17:16.360 "ddgst": false, 00:17:16.360 "fast_io_fail_timeout_sec": 0, 00:17:16.360 "hdgst": false, 00:17:16.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.360 "name": "TLSTEST", 00:17:16.360 "prchk_guard": false, 00:17:16.360 "prchk_reftag": false, 00:17:16.360 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:16.360 "reconnect_delay_sec": 0, 00:17:16.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.360 "traddr": "10.0.0.2", 00:17:16.360 "trsvcid": "4420", 00:17:16.360 "trtype": "TCP" 00:17:16.360 } 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "method": "bdev_nvme_set_hotplug", 00:17:16.360 "params": { 00:17:16.360 "enable": false, 00:17:16.360 "period_us": 100000 00:17:16.360 } 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "method": "bdev_wait_for_examine" 00:17:16.360 } 00:17:16.360 ] 00:17:16.360 }, 00:17:16.360 { 00:17:16.360 "subsystem": "nbd", 00:17:16.360 "config": [] 00:17:16.360 } 00:17:16.360 ] 00:17:16.360 }' 00:17:16.360 [2024-02-14 19:17:53.599401] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:16.360 [2024-02-14 19:17:53.599541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77616 ] 00:17:16.360 [2024-02-14 19:17:53.742796] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.618 [2024-02-14 19:17:53.870723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.618 [2024-02-14 19:17:53.870817] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:17:16.618 [2024-02-14 19:17:54.033855] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.191 19:17:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:17.191 19:17:54 -- common/autotest_common.sh@850 -- # return 0 00:17:17.191 19:17:54 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:17.460 Running I/O for 10 seconds... 
00:17:27.452 00:17:27.452 Latency(us) 00:17:27.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.452 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:27.452 Verification LBA range: start 0x0 length 0x2000 00:17:27.452 TLSTESTn1 : 10.01 5951.74 23.25 0.00 0.00 21475.37 2323.55 23712.12 00:17:27.452 =================================================================================================================== 00:17:27.452 Total : 5951.74 23.25 0.00 0.00 21475.37 2323.55 23712.12 00:17:27.452 0 00:17:27.452 19:18:04 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:27.452 19:18:04 -- target/tls.sh@223 -- # killprocess 77616 00:17:27.452 19:18:04 -- common/autotest_common.sh@924 -- # '[' -z 77616 ']' 00:17:27.452 19:18:04 -- common/autotest_common.sh@928 -- # kill -0 77616 00:17:27.452 19:18:04 -- common/autotest_common.sh@929 -- # uname 00:17:27.452 19:18:04 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:27.452 19:18:04 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 77616 00:17:27.452 19:18:04 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:17:27.452 19:18:04 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:17:27.452 killing process with pid 77616 00:17:27.452 19:18:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 77616' 00:17:27.452 19:18:04 -- common/autotest_common.sh@943 -- # kill 77616 00:17:27.452 Received shutdown signal, test time was about 10.000000 seconds 00:17:27.452 00:17:27.452 Latency(us) 00:17:27.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.452 =================================================================================================================== 00:17:27.452 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:27.452 19:18:04 -- common/autotest_common.sh@948 -- # wait 77616 00:17:27.452 [2024-02-14 19:18:04.689753] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:17:27.711 19:18:04 -- target/tls.sh@224 -- # killprocess 77572 00:17:27.711 19:18:04 -- common/autotest_common.sh@924 -- # '[' -z 77572 ']' 00:17:27.711 19:18:04 -- common/autotest_common.sh@928 -- # kill -0 77572 00:17:27.711 19:18:04 -- common/autotest_common.sh@929 -- # uname 00:17:27.711 19:18:04 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:27.711 19:18:04 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 77572 00:17:27.711 19:18:04 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:17:27.711 19:18:04 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:17:27.711 19:18:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 77572' 00:17:27.711 killing process with pid 77572 00:17:27.711 19:18:04 -- common/autotest_common.sh@943 -- # kill 77572 00:17:27.711 [2024-02-14 19:18:04.978978] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:17:27.711 19:18:04 -- common/autotest_common.sh@948 -- # wait 77572 00:17:27.970 19:18:05 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:27.970 19:18:05 -- target/tls.sh@227 -- # cleanup 00:17:27.970 19:18:05 -- target/tls.sh@15 -- # process_shm --id 0 00:17:27.970 19:18:05 -- 
common/autotest_common.sh@794 -- # type=--id 00:17:27.970 19:18:05 -- common/autotest_common.sh@795 -- # id=0 00:17:27.970 19:18:05 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:27.970 19:18:05 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:27.970 19:18:05 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:27.970 19:18:05 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:27.970 19:18:05 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:27.970 19:18:05 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:27.970 nvmf_trace.0 00:17:27.970 19:18:05 -- common/autotest_common.sh@809 -- # return 0 00:17:27.970 19:18:05 -- target/tls.sh@16 -- # killprocess 77616 00:17:27.970 19:18:05 -- common/autotest_common.sh@924 -- # '[' -z 77616 ']' 00:17:27.970 19:18:05 -- common/autotest_common.sh@928 -- # kill -0 77616 00:17:27.970 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (77616) - No such process 00:17:27.970 Process with pid 77616 is not found 00:17:27.970 19:18:05 -- common/autotest_common.sh@951 -- # echo 'Process with pid 77616 is not found' 00:17:27.970 19:18:05 -- target/tls.sh@17 -- # nvmftestfini 00:17:27.970 19:18:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:27.970 19:18:05 -- nvmf/common.sh@116 -- # sync 00:17:27.970 19:18:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:27.970 19:18:05 -- nvmf/common.sh@119 -- # set +e 00:17:27.970 19:18:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:27.970 19:18:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:27.970 rmmod nvme_tcp 00:17:28.230 rmmod nvme_fabrics 00:17:28.230 rmmod nvme_keyring 00:17:28.230 19:18:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:28.230 19:18:05 -- nvmf/common.sh@123 -- # set -e 00:17:28.230 19:18:05 -- nvmf/common.sh@124 -- # return 0 00:17:28.230 19:18:05 -- nvmf/common.sh@477 -- # '[' -n 77572 ']' 00:17:28.230 19:18:05 -- nvmf/common.sh@478 -- # killprocess 77572 00:17:28.230 19:18:05 -- common/autotest_common.sh@924 -- # '[' -z 77572 ']' 00:17:28.230 19:18:05 -- common/autotest_common.sh@928 -- # kill -0 77572 00:17:28.230 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (77572) - No such process 00:17:28.230 Process with pid 77572 is not found 00:17:28.230 19:18:05 -- common/autotest_common.sh@951 -- # echo 'Process with pid 77572 is not found' 00:17:28.230 19:18:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:28.230 19:18:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:28.230 19:18:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:28.230 19:18:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.230 19:18:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:28.230 19:18:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.230 19:18:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.230 19:18:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.230 19:18:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:28.230 19:18:05 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.230 00:17:28.230 real 1m13.033s 00:17:28.230 user 1m51.240s 00:17:28.230 sys 
0m26.070s 00:17:28.230 19:18:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:28.230 19:18:05 -- common/autotest_common.sh@10 -- # set +x 00:17:28.230 ************************************ 00:17:28.230 END TEST nvmf_tls 00:17:28.230 ************************************ 00:17:28.230 19:18:05 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:28.230 19:18:05 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:17:28.230 19:18:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:28.230 19:18:05 -- common/autotest_common.sh@10 -- # set +x 00:17:28.230 ************************************ 00:17:28.230 START TEST nvmf_fips 00:17:28.230 ************************************ 00:17:28.230 19:18:05 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:28.230 * Looking for test storage... 00:17:28.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:28.230 19:18:05 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.230 19:18:05 -- nvmf/common.sh@7 -- # uname -s 00:17:28.230 19:18:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.230 19:18:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.230 19:18:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.230 19:18:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.230 19:18:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.230 19:18:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.230 19:18:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.230 19:18:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.230 19:18:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.230 19:18:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.230 19:18:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:17:28.230 19:18:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:17:28.230 19:18:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.230 19:18:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.230 19:18:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.230 19:18:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.230 19:18:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.230 19:18:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.230 19:18:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.230 19:18:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.230 19:18:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.230 19:18:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.230 19:18:05 -- paths/export.sh@5 -- # export PATH 00:17:28.230 19:18:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.230 19:18:05 -- nvmf/common.sh@46 -- # : 0 00:17:28.230 19:18:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:28.230 19:18:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:28.230 19:18:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:28.230 19:18:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.230 19:18:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.230 19:18:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:28.230 19:18:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:28.230 19:18:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:28.230 19:18:05 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.230 19:18:05 -- fips/fips.sh@89 -- # check_openssl_version 00:17:28.230 19:18:05 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:28.230 19:18:05 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:28.230 19:18:05 -- fips/fips.sh@85 -- # openssl version 00:17:28.230 19:18:05 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:28.230 19:18:05 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:28.230 19:18:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:28.230 19:18:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:28.230 19:18:05 -- scripts/common.sh@335 -- # IFS=.-: 00:17:28.230 19:18:05 -- scripts/common.sh@335 -- # read -ra ver1 00:17:28.230 19:18:05 -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.230 19:18:05 -- scripts/common.sh@336 -- # read -ra ver2 00:17:28.231 19:18:05 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:28.231 19:18:05 -- scripts/common.sh@339 -- # ver1_l=3 00:17:28.231 19:18:05 -- scripts/common.sh@340 -- # ver2_l=3 00:17:28.231 19:18:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:28.231 19:18:05 -- 
scripts/common.sh@343 -- # case "$op" in 00:17:28.231 19:18:05 -- scripts/common.sh@347 -- # : 1 00:17:28.231 19:18:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:28.231 19:18:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:28.231 19:18:05 -- scripts/common.sh@364 -- # decimal 3 00:17:28.231 19:18:05 -- scripts/common.sh@352 -- # local d=3 00:17:28.231 19:18:05 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:28.231 19:18:05 -- scripts/common.sh@354 -- # echo 3 00:17:28.231 19:18:05 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:28.231 19:18:05 -- scripts/common.sh@365 -- # decimal 3 00:17:28.231 19:18:05 -- scripts/common.sh@352 -- # local d=3 00:17:28.231 19:18:05 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:28.231 19:18:05 -- scripts/common.sh@354 -- # echo 3 00:17:28.231 19:18:05 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:28.231 19:18:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:28.231 19:18:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:28.231 19:18:05 -- scripts/common.sh@363 -- # (( v++ )) 00:17:28.231 19:18:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:28.489 19:18:05 -- scripts/common.sh@364 -- # decimal 0 00:17:28.489 19:18:05 -- scripts/common.sh@352 -- # local d=0 00:17:28.489 19:18:05 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:28.489 19:18:05 -- scripts/common.sh@354 -- # echo 0 00:17:28.489 19:18:05 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:28.490 19:18:05 -- scripts/common.sh@365 -- # decimal 0 00:17:28.490 19:18:05 -- scripts/common.sh@352 -- # local d=0 00:17:28.490 19:18:05 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:28.490 19:18:05 -- scripts/common.sh@354 -- # echo 0 00:17:28.490 19:18:05 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:28.490 19:18:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:28.490 19:18:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:28.490 19:18:05 -- scripts/common.sh@363 -- # (( v++ )) 00:17:28.490 19:18:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:28.490 19:18:05 -- scripts/common.sh@364 -- # decimal 9 00:17:28.490 19:18:05 -- scripts/common.sh@352 -- # local d=9 00:17:28.490 19:18:05 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:28.490 19:18:05 -- scripts/common.sh@354 -- # echo 9 00:17:28.490 19:18:05 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:28.490 19:18:05 -- scripts/common.sh@365 -- # decimal 0 00:17:28.490 19:18:05 -- scripts/common.sh@352 -- # local d=0 00:17:28.490 19:18:05 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:28.490 19:18:05 -- scripts/common.sh@354 -- # echo 0 00:17:28.490 19:18:05 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:28.490 19:18:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:28.490 19:18:05 -- scripts/common.sh@366 -- # return 0 00:17:28.490 19:18:05 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:28.490 19:18:05 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:28.490 19:18:05 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:28.490 19:18:05 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:28.490 19:18:05 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:28.490 19:18:05 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:28.490 19:18:05 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:28.490 19:18:05 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:28.490 19:18:05 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:28.490 19:18:05 -- fips/fips.sh@114 -- # build_openssl_config 00:17:28.490 19:18:05 -- fips/fips.sh@37 -- # cat 00:17:28.490 19:18:05 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:28.490 19:18:05 -- fips/fips.sh@58 -- # cat - 00:17:28.490 19:18:05 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:28.490 19:18:05 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:28.490 19:18:05 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:28.490 19:18:05 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:28.490 19:18:05 -- fips/fips.sh@117 -- # grep name 00:17:28.490 19:18:05 -- fips/fips.sh@117 -- # openssl list -providers 00:17:28.490 19:18:05 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:28.490 19:18:05 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:28.490 19:18:05 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:28.490 19:18:05 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:28.490 19:18:05 -- fips/fips.sh@128 -- # : 00:17:28.490 19:18:05 -- common/autotest_common.sh@638 -- # local es=0 00:17:28.490 19:18:05 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:28.490 19:18:05 -- common/autotest_common.sh@626 -- # local arg=openssl 00:17:28.490 19:18:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:28.490 19:18:05 -- common/autotest_common.sh@630 -- # type -t openssl 00:17:28.490 19:18:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:28.490 19:18:05 -- common/autotest_common.sh@632 -- # type -P openssl 00:17:28.490 19:18:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:28.490 19:18:05 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:17:28.490 19:18:05 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:17:28.490 19:18:05 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:17:28.490 Error setting digest 00:17:28.490 009219A0977F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:28.490 009219A0977F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:28.490 19:18:05 -- common/autotest_common.sh@641 -- # es=1 00:17:28.490 19:18:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:28.490 19:18:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:28.490 19:18:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 
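The block above is the FIPS preamble in condensed form: fips.sh requires OpenSSL >= 3.0.0, checks that both a base and a fips provider are visible, and then treats the failure of openssl md5 under the generated spdk_fips.conf as evidence that non-approved digests are really blocked (the "unsupported ... inner_evp_generic_fetch" error is the expected outcome, which is why the command is wrapped in NOT). A rough standalone restatement, assuming spdk_fips.conf has already been produced by the test's build_openssl_config helper (this is not the test's own code):

# approximate restatement of the checks traced above
version_ge() {                                  # component-wise "ver1 >= ver2", numeric fields only
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
    done
    return 0
}

version_ge "$(openssl version | awk '{print $2}')" 3.0.0 || echo "OpenSSL too old for provider-based FIPS" >&2
OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name   # expect a base and a fips provider
if ! OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null &> /dev/null; then
    echo "MD5 rejected - FIPS enforcement is active"
fi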
00:17:28.490 19:18:05 -- fips/fips.sh@131 -- # nvmftestinit 00:17:28.490 19:18:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:28.490 19:18:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.490 19:18:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:28.490 19:18:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:28.490 19:18:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:28.490 19:18:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.490 19:18:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.490 19:18:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.490 19:18:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:28.490 19:18:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:28.490 19:18:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:28.490 19:18:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:28.490 19:18:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:28.490 19:18:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:28.490 19:18:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.490 19:18:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.490 19:18:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:28.490 19:18:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:28.490 19:18:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.490 19:18:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.490 19:18:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.490 19:18:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.490 19:18:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.490 19:18:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.490 19:18:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.490 19:18:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.490 19:18:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:28.490 19:18:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:28.490 Cannot find device "nvmf_tgt_br" 00:17:28.490 19:18:05 -- nvmf/common.sh@154 -- # true 00:17:28.490 19:18:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.490 Cannot find device "nvmf_tgt_br2" 00:17:28.490 19:18:05 -- nvmf/common.sh@155 -- # true 00:17:28.491 19:18:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:28.491 19:18:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:28.491 Cannot find device "nvmf_tgt_br" 00:17:28.491 19:18:05 -- nvmf/common.sh@157 -- # true 00:17:28.491 19:18:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:28.491 Cannot find device "nvmf_tgt_br2" 00:17:28.491 19:18:05 -- nvmf/common.sh@158 -- # true 00:17:28.491 19:18:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:28.749 19:18:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:28.749 19:18:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.749 19:18:05 -- nvmf/common.sh@161 -- # true 00:17:28.749 19:18:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:28.749 19:18:05 -- nvmf/common.sh@162 -- # true 00:17:28.749 19:18:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.749 19:18:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.749 19:18:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.749 19:18:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.749 19:18:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.749 19:18:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.749 19:18:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.749 19:18:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:28.749 19:18:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:28.749 19:18:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:28.749 19:18:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:28.749 19:18:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:28.749 19:18:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:28.749 19:18:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.749 19:18:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.749 19:18:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.749 19:18:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:28.749 19:18:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:28.749 19:18:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.749 19:18:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:28.749 19:18:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.749 19:18:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.749 19:18:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.749 19:18:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:28.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:17:28.749 00:17:28.749 --- 10.0.0.2 ping statistics --- 00:17:28.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.749 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:28.749 19:18:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:28.749 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:28.749 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:28.749 00:17:28.749 --- 10.0.0.3 ping statistics --- 00:17:28.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.750 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:28.750 19:18:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:29.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:29.008 00:17:29.008 --- 10.0.0.1 ping statistics --- 00:17:29.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.008 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:29.008 19:18:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.008 19:18:06 -- nvmf/common.sh@421 -- # return 0 00:17:29.008 19:18:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:29.008 19:18:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.008 19:18:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:29.008 19:18:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:29.008 19:18:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.008 19:18:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:29.008 19:18:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:29.008 19:18:06 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:29.008 19:18:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:29.008 19:18:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:29.008 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:17:29.008 19:18:06 -- nvmf/common.sh@469 -- # nvmfpid=77977 00:17:29.008 19:18:06 -- nvmf/common.sh@470 -- # waitforlisten 77977 00:17:29.008 19:18:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:29.008 19:18:06 -- common/autotest_common.sh@817 -- # '[' -z 77977 ']' 00:17:29.008 19:18:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.008 19:18:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:29.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.008 19:18:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.008 19:18:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:29.008 19:18:06 -- common/autotest_common.sh@10 -- # set +x 00:17:29.008 [2024-02-14 19:18:06.288185] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:17:29.008 [2024-02-14 19:18:06.288299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.267 [2024-02-14 19:18:06.426173] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.267 [2024-02-14 19:18:06.559378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:29.267 [2024-02-14 19:18:06.559565] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.267 [2024-02-14 19:18:06.559583] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.267 [2024-02-14 19:18:06.559594] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:29.267 [2024-02-14 19:18:06.559630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.834 19:18:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:29.834 19:18:07 -- common/autotest_common.sh@850 -- # return 0 00:17:29.834 19:18:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:29.834 19:18:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:29.834 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:17:30.105 19:18:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.105 19:18:07 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:30.105 19:18:07 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:30.105 19:18:07 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:30.105 19:18:07 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:30.105 19:18:07 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:30.105 19:18:07 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:30.105 19:18:07 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:30.105 19:18:07 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.105 [2024-02-14 19:18:07.520049] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.366 [2024-02-14 19:18:07.536002] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:30.366 [2024-02-14 19:18:07.536235] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.366 malloc0 00:17:30.366 19:18:07 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.366 19:18:07 -- fips/fips.sh@148 -- # bdevperf_pid=78029 00:17:30.366 19:18:07 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:30.366 19:18:07 -- fips/fips.sh@149 -- # waitforlisten 78029 /var/tmp/bdevperf.sock 00:17:30.366 19:18:07 -- common/autotest_common.sh@817 -- # '[' -z 78029 ']' 00:17:30.366 19:18:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.366 19:18:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:30.366 19:18:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.366 19:18:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:30.366 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:17:30.366 [2024-02-14 19:18:07.679362] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:17:30.366 [2024-02-14 19:18:07.679475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78029 ] 00:17:30.623 [2024-02-14 19:18:07.820361] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.623 [2024-02-14 19:18:07.954482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.189 19:18:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:31.189 19:18:08 -- common/autotest_common.sh@850 -- # return 0 00:17:31.189 19:18:08 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:31.448 [2024-02-14 19:18:08.858801] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.706 TLSTESTn1 00:17:31.706 19:18:08 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:31.706 Running I/O for 10 seconds... 00:17:41.700 00:17:41.700 Latency(us) 00:17:41.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.700 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:41.700 Verification LBA range: start 0x0 length 0x2000 00:17:41.700 TLSTESTn1 : 10.02 5541.24 21.65 0.00 0.00 23059.64 5272.67 23950.43 00:17:41.700 =================================================================================================================== 00:17:41.700 Total : 5541.24 21.65 0.00 0.00 23059.64 5272.67 23950.43 00:17:41.700 0 00:17:41.700 19:18:19 -- fips/fips.sh@1 -- # cleanup 00:17:41.700 19:18:19 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:41.700 19:18:19 -- common/autotest_common.sh@794 -- # type=--id 00:17:41.700 19:18:19 -- common/autotest_common.sh@795 -- # id=0 00:17:41.700 19:18:19 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:41.700 19:18:19 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:41.958 19:18:19 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:41.958 19:18:19 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:41.958 19:18:19 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:41.958 19:18:19 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:41.958 nvmf_trace.0 00:17:41.958 19:18:19 -- common/autotest_common.sh@809 -- # return 0 00:17:41.958 19:18:19 -- fips/fips.sh@16 -- # killprocess 78029 00:17:41.958 19:18:19 -- common/autotest_common.sh@924 -- # '[' -z 78029 ']' 00:17:41.958 19:18:19 -- common/autotest_common.sh@928 -- # kill -0 78029 00:17:41.958 19:18:19 -- common/autotest_common.sh@929 -- # uname 00:17:41.958 19:18:19 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:41.959 19:18:19 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 78029 00:17:41.959 19:18:19 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:17:41.959 19:18:19 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:17:41.959 killing process with pid 78029 00:17:41.959 19:18:19 -- common/autotest_common.sh@942 -- # echo 'killing 
process with pid 78029' 00:17:41.959 19:18:19 -- common/autotest_common.sh@943 -- # kill 78029 00:17:41.959 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.959 00:17:41.959 Latency(us) 00:17:41.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.959 =================================================================================================================== 00:17:41.959 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.959 19:18:19 -- common/autotest_common.sh@948 -- # wait 78029 00:17:42.217 19:18:19 -- fips/fips.sh@17 -- # nvmftestfini 00:17:42.217 19:18:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:42.217 19:18:19 -- nvmf/common.sh@116 -- # sync 00:17:42.217 19:18:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:42.217 19:18:19 -- nvmf/common.sh@119 -- # set +e 00:17:42.217 19:18:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:42.217 19:18:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:42.217 rmmod nvme_tcp 00:17:42.217 rmmod nvme_fabrics 00:17:42.217 rmmod nvme_keyring 00:17:42.217 19:18:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:42.217 19:18:19 -- nvmf/common.sh@123 -- # set -e 00:17:42.217 19:18:19 -- nvmf/common.sh@124 -- # return 0 00:17:42.218 19:18:19 -- nvmf/common.sh@477 -- # '[' -n 77977 ']' 00:17:42.218 19:18:19 -- nvmf/common.sh@478 -- # killprocess 77977 00:17:42.218 19:18:19 -- common/autotest_common.sh@924 -- # '[' -z 77977 ']' 00:17:42.218 19:18:19 -- common/autotest_common.sh@928 -- # kill -0 77977 00:17:42.218 19:18:19 -- common/autotest_common.sh@929 -- # uname 00:17:42.218 19:18:19 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:42.218 19:18:19 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 77977 00:17:42.218 19:18:19 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:17:42.218 19:18:19 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:17:42.218 19:18:19 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 77977' 00:17:42.218 killing process with pid 77977 00:17:42.218 19:18:19 -- common/autotest_common.sh@943 -- # kill 77977 00:17:42.218 19:18:19 -- common/autotest_common.sh@948 -- # wait 77977 00:17:42.476 19:18:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:42.476 19:18:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:42.476 19:18:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:42.476 19:18:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.476 19:18:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:42.476 19:18:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.477 19:18:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.477 19:18:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.736 19:18:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:42.736 19:18:19 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:42.736 00:17:42.736 real 0m14.379s 00:17:42.736 user 0m19.178s 00:17:42.736 sys 0m5.972s 00:17:42.736 19:18:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:42.736 ************************************ 00:17:42.736 END TEST nvmf_fips 00:17:42.736 19:18:19 -- common/autotest_common.sh@10 -- # set +x 00:17:42.736 ************************************ 00:17:42.736 19:18:19 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:42.736 19:18:19 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:42.736 19:18:19 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:17:42.736 19:18:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:42.736 19:18:19 -- common/autotest_common.sh@10 -- # set +x 00:17:42.736 ************************************ 00:17:42.736 START TEST nvmf_fuzz 00:17:42.736 ************************************ 00:17:42.736 19:18:19 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:42.736 * Looking for test storage... 00:17:42.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:42.736 19:18:20 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.736 19:18:20 -- nvmf/common.sh@7 -- # uname -s 00:17:42.736 19:18:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.736 19:18:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.736 19:18:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.736 19:18:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.736 19:18:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.736 19:18:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.736 19:18:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.736 19:18:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.736 19:18:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.736 19:18:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.736 19:18:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:17:42.736 19:18:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:17:42.736 19:18:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.736 19:18:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.736 19:18:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.736 19:18:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.736 19:18:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.736 19:18:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.736 19:18:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.736 19:18:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.736 19:18:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.736 
19:18:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.736 19:18:20 -- paths/export.sh@5 -- # export PATH 00:17:42.736 19:18:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.736 19:18:20 -- nvmf/common.sh@46 -- # : 0 00:17:42.736 19:18:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:42.736 19:18:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:42.736 19:18:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:42.736 19:18:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.736 19:18:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.736 19:18:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:42.736 19:18:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:42.736 19:18:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:42.736 19:18:20 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:42.736 19:18:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:42.736 19:18:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.736 19:18:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:42.736 19:18:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:42.736 19:18:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:42.736 19:18:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.736 19:18:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.736 19:18:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.736 19:18:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:42.736 19:18:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:42.736 19:18:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:42.736 19:18:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:42.736 19:18:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:42.736 19:18:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:42.736 19:18:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.736 19:18:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.736 19:18:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:42.736 19:18:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:42.736 19:18:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.736 19:18:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.736 19:18:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.736 19:18:20 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.736 19:18:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.737 19:18:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.737 19:18:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.737 19:18:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.737 19:18:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:42.737 19:18:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:42.737 Cannot find device "nvmf_tgt_br" 00:17:42.737 19:18:20 -- nvmf/common.sh@154 -- # true 00:17:42.737 19:18:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.737 Cannot find device "nvmf_tgt_br2" 00:17:42.737 19:18:20 -- nvmf/common.sh@155 -- # true 00:17:42.737 19:18:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:42.737 19:18:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:42.737 Cannot find device "nvmf_tgt_br" 00:17:42.737 19:18:20 -- nvmf/common.sh@157 -- # true 00:17:42.737 19:18:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:42.737 Cannot find device "nvmf_tgt_br2" 00:17:42.737 19:18:20 -- nvmf/common.sh@158 -- # true 00:17:42.737 19:18:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:42.996 19:18:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:42.996 19:18:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.996 19:18:20 -- nvmf/common.sh@161 -- # true 00:17:42.996 19:18:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.996 19:18:20 -- nvmf/common.sh@162 -- # true 00:17:42.996 19:18:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.996 19:18:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.996 19:18:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.996 19:18:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.996 19:18:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.996 19:18:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.996 19:18:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.996 19:18:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:42.996 19:18:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:42.996 19:18:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:42.996 19:18:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:42.996 19:18:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:42.996 19:18:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:42.996 19:18:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.996 19:18:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:42.996 19:18:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.996 19:18:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:17:42.996 19:18:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:42.996 19:18:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:42.996 19:18:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.996 19:18:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.996 19:18:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.996 19:18:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.996 19:18:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:42.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:42.996 00:17:42.996 --- 10.0.0.2 ping statistics --- 00:17:42.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.996 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:42.996 19:18:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:42.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:42.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:42.996 00:17:42.996 --- 10.0.0.3 ping statistics --- 00:17:42.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.996 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:42.996 19:18:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:42.996 00:17:42.996 --- 10.0.0.1 ping statistics --- 00:17:42.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.996 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:42.996 19:18:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.996 19:18:20 -- nvmf/common.sh@421 -- # return 0 00:17:42.996 19:18:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:42.996 19:18:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.996 19:18:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:42.996 19:18:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:42.996 19:18:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.996 19:18:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:42.996 19:18:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:42.996 19:18:20 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78379 00:17:42.996 19:18:20 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:42.996 19:18:20 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:42.996 19:18:20 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78379 00:17:42.996 19:18:20 -- common/autotest_common.sh@817 -- # '[' -z 78379 ']' 00:17:42.996 19:18:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.996 19:18:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.996 19:18:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:42.996 19:18:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.996 19:18:20 -- common/autotest_common.sh@10 -- # set +x 00:17:44.374 19:18:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:44.374 19:18:21 -- common/autotest_common.sh@850 -- # return 0 00:17:44.374 19:18:21 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.374 19:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.374 19:18:21 -- common/autotest_common.sh@10 -- # set +x 00:17:44.374 19:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.374 19:18:21 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:44.374 19:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.374 19:18:21 -- common/autotest_common.sh@10 -- # set +x 00:17:44.374 Malloc0 00:17:44.374 19:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.374 19:18:21 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:44.374 19:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.374 19:18:21 -- common/autotest_common.sh@10 -- # set +x 00:17:44.374 19:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.374 19:18:21 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:44.374 19:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.374 19:18:21 -- common/autotest_common.sh@10 -- # set +x 00:17:44.374 19:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.375 19:18:21 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.375 19:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.375 19:18:21 -- common/autotest_common.sh@10 -- # set +x 00:17:44.375 19:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.375 19:18:21 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:44.375 19:18:21 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:44.634 Shutting down the fuzz application 00:17:44.634 19:18:21 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:45.202 Shutting down the fuzz application 00:17:45.202 19:18:22 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.202 19:18:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:45.202 19:18:22 -- common/autotest_common.sh@10 -- # set +x 00:17:45.202 19:18:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:45.202 19:18:22 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:45.202 19:18:22 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:45.202 19:18:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:45.202 19:18:22 -- nvmf/common.sh@116 -- # sync 00:17:45.202 19:18:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:45.202 19:18:22 -- nvmf/common.sh@119 -- # set +e 00:17:45.202 19:18:22 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:17:45.202 19:18:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:45.202 rmmod nvme_tcp 00:17:45.202 rmmod nvme_fabrics 00:17:45.202 rmmod nvme_keyring 00:17:45.202 19:18:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:45.202 19:18:22 -- nvmf/common.sh@123 -- # set -e 00:17:45.202 19:18:22 -- nvmf/common.sh@124 -- # return 0 00:17:45.202 19:18:22 -- nvmf/common.sh@477 -- # '[' -n 78379 ']' 00:17:45.202 19:18:22 -- nvmf/common.sh@478 -- # killprocess 78379 00:17:45.202 19:18:22 -- common/autotest_common.sh@924 -- # '[' -z 78379 ']' 00:17:45.202 19:18:22 -- common/autotest_common.sh@928 -- # kill -0 78379 00:17:45.202 19:18:22 -- common/autotest_common.sh@929 -- # uname 00:17:45.202 19:18:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:45.202 19:18:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 78379 00:17:45.202 killing process with pid 78379 00:17:45.202 19:18:22 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:45.202 19:18:22 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:45.202 19:18:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 78379' 00:17:45.202 19:18:22 -- common/autotest_common.sh@943 -- # kill 78379 00:17:45.202 19:18:22 -- common/autotest_common.sh@948 -- # wait 78379 00:17:45.461 19:18:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:45.461 19:18:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:45.461 19:18:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:45.461 19:18:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.461 19:18:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:45.461 19:18:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.461 19:18:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.461 19:18:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.461 19:18:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:45.461 19:18:22 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:45.461 00:17:45.461 real 0m2.900s 00:17:45.461 user 0m3.237s 00:17:45.461 sys 0m0.649s 00:17:45.461 19:18:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:45.461 19:18:22 -- common/autotest_common.sh@10 -- # set +x 00:17:45.461 ************************************ 00:17:45.461 END TEST nvmf_fuzz 00:17:45.461 ************************************ 00:17:45.721 19:18:22 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:45.721 19:18:22 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:17:45.721 19:18:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:45.721 19:18:22 -- common/autotest_common.sh@10 -- # set +x 00:17:45.721 ************************************ 00:17:45.721 START TEST nvmf_multiconnection 00:17:45.721 ************************************ 00:17:45.721 19:18:22 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:45.721 * Looking for test storage... 
00:17:45.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:45.721 19:18:22 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:45.721 19:18:22 -- nvmf/common.sh@7 -- # uname -s 00:17:45.721 19:18:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.721 19:18:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.721 19:18:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.721 19:18:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.721 19:18:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.721 19:18:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.721 19:18:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.721 19:18:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.721 19:18:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.721 19:18:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.721 19:18:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:17:45.721 19:18:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:17:45.721 19:18:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.721 19:18:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.721 19:18:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:45.721 19:18:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.721 19:18:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.721 19:18:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.721 19:18:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.721 19:18:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.721 19:18:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.721 19:18:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.721 19:18:22 -- 
paths/export.sh@5 -- # export PATH 00:17:45.721 19:18:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.721 19:18:22 -- nvmf/common.sh@46 -- # : 0 00:17:45.721 19:18:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:45.721 19:18:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:45.721 19:18:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:45.721 19:18:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.721 19:18:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.721 19:18:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:45.721 19:18:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:45.721 19:18:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:45.721 19:18:22 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:45.721 19:18:22 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:45.721 19:18:22 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:45.721 19:18:22 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:45.721 19:18:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:45.721 19:18:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.721 19:18:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:45.721 19:18:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:45.721 19:18:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:45.721 19:18:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.721 19:18:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.721 19:18:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.721 19:18:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:45.721 19:18:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:45.721 19:18:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:45.721 19:18:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:45.721 19:18:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:45.721 19:18:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:45.721 19:18:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.721 19:18:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.721 19:18:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:45.721 19:18:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:45.721 19:18:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:45.721 19:18:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:45.721 19:18:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:45.721 19:18:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.721 19:18:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:45.721 19:18:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:45.721 19:18:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:45.721 19:18:23 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:45.721 19:18:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:45.721 19:18:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:45.721 Cannot find device "nvmf_tgt_br" 00:17:45.721 19:18:23 -- nvmf/common.sh@154 -- # true 00:17:45.721 19:18:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:45.721 Cannot find device "nvmf_tgt_br2" 00:17:45.721 19:18:23 -- nvmf/common.sh@155 -- # true 00:17:45.722 19:18:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:45.722 19:18:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:45.722 Cannot find device "nvmf_tgt_br" 00:17:45.722 19:18:23 -- nvmf/common.sh@157 -- # true 00:17:45.722 19:18:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:45.722 Cannot find device "nvmf_tgt_br2" 00:17:45.722 19:18:23 -- nvmf/common.sh@158 -- # true 00:17:45.722 19:18:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:45.722 19:18:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:45.980 19:18:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:45.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.981 19:18:23 -- nvmf/common.sh@161 -- # true 00:17:45.981 19:18:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:45.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.981 19:18:23 -- nvmf/common.sh@162 -- # true 00:17:45.981 19:18:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:45.981 19:18:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:45.981 19:18:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:45.981 19:18:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:45.981 19:18:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:45.981 19:18:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:45.981 19:18:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:45.981 19:18:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:45.981 19:18:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:45.981 19:18:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:45.981 19:18:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:45.981 19:18:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:45.981 19:18:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:45.981 19:18:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:45.981 19:18:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:45.981 19:18:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:45.981 19:18:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:45.981 19:18:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:45.981 19:18:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:45.981 19:18:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:45.981 19:18:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:45.981 
19:18:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:45.981 19:18:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:45.981 19:18:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:45.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:45.981 00:17:45.981 --- 10.0.0.2 ping statistics --- 00:17:45.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.981 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:45.981 19:18:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:45.981 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:45.981 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:17:45.981 00:17:45.981 --- 10.0.0.3 ping statistics --- 00:17:45.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.981 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:45.981 19:18:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:45.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:45.981 00:17:45.981 --- 10.0.0.1 ping statistics --- 00:17:45.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.981 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:45.981 19:18:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.981 19:18:23 -- nvmf/common.sh@421 -- # return 0 00:17:45.981 19:18:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:45.981 19:18:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.981 19:18:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:45.981 19:18:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:45.981 19:18:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.981 19:18:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:45.981 19:18:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:45.981 19:18:23 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:45.981 19:18:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:45.981 19:18:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:45.981 19:18:23 -- common/autotest_common.sh@10 -- # set +x 00:17:45.981 19:18:23 -- nvmf/common.sh@469 -- # nvmfpid=78590 00:17:45.981 19:18:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:45.981 19:18:23 -- nvmf/common.sh@470 -- # waitforlisten 78590 00:17:45.981 19:18:23 -- common/autotest_common.sh@817 -- # '[' -z 78590 ']' 00:17:45.981 19:18:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.981 19:18:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:45.981 19:18:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.981 19:18:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:45.981 19:18:23 -- common/autotest_common.sh@10 -- # set +x 00:17:46.240 [2024-02-14 19:18:23.438297] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:17:46.240 [2024-02-14 19:18:23.438731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.240 [2024-02-14 19:18:23.579754] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.499 [2024-02-14 19:18:23.722722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:46.499 [2024-02-14 19:18:23.723203] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.499 [2024-02-14 19:18:23.723349] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.499 [2024-02-14 19:18:23.723469] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.499 [2024-02-14 19:18:23.723622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.499 [2024-02-14 19:18:23.724002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.499 [2024-02-14 19:18:23.724009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.499 [2024-02-14 19:18:23.723828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.067 19:18:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:47.067 19:18:24 -- common/autotest_common.sh@850 -- # return 0 00:17:47.067 19:18:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:47.067 19:18:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:47.067 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.067 19:18:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.067 19:18:24 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:47.067 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.067 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 [2024-02-14 19:18:24.486974] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:47.326 19:18:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.326 19:18:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 Malloc1 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 [2024-02-14 19:18:24.576230] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.326 19:18:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 Malloc2 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.326 19:18:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 Malloc3 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.326 19:18:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:47.326 
19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 Malloc4 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.326 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.326 19:18:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:47.326 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.326 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.585 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.585 19:18:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.585 19:18:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:47.585 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.585 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.585 Malloc5 00:17:47.585 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.585 19:18:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:47.585 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.585 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.585 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.585 19:18:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:47.585 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.585 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.585 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.585 19:18:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:47.585 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.585 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.585 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.585 19:18:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.585 19:18:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:47.585 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.585 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.585 Malloc6 00:17:47.585 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.585 19:18:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:47.585 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.585 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.585 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.585 19:18:24 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:47.585 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.585 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.585 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.585 19:18:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:47.585 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.585 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.585 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.586 19:18:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.586 Malloc7 00:17:47.586 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.586 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.586 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.586 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.586 19:18:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.586 Malloc8 00:17:47.586 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.586 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.586 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 
00:17:47.586 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.586 19:18:24 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.586 Malloc9 00:17:47.586 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.586 19:18:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.586 19:18:24 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:47.586 19:18:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.586 19:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:47.845 19:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.845 19:18:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:47.845 19:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.845 19:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:47.845 19:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.845 19:18:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.845 19:18:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:47.845 19:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.845 19:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:47.845 Malloc10 00:17:47.845 19:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.845 19:18:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:47.845 19:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.845 19:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:47.845 19:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.845 19:18:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:47.845 19:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.845 19:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:47.845 19:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.845 19:18:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:47.845 19:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.845 19:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:47.845 19:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.845 19:18:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.845 19:18:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:47.845 19:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.845 19:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:47.845 Malloc11 00:17:47.845 19:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.845 19:18:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:47.845 19:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.845 19:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:47.845 19:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.845 19:18:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:47.845 19:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.845 19:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:47.845 19:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.845 19:18:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:47.845 19:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.845 19:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:47.845 19:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.845 19:18:25 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:47.845 19:18:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:47.845 19:18:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.104 19:18:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:48.104 19:18:25 -- common/autotest_common.sh@1175 -- # local i=0 00:17:48.104 19:18:25 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.104 19:18:25 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:17:48.104 19:18:25 -- common/autotest_common.sh@1182 -- # sleep 2 00:17:50.033 19:18:27 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:17:50.033 19:18:27 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:17:50.033 19:18:27 -- common/autotest_common.sh@1184 -- # grep -c SPDK1 00:17:50.033 19:18:27 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:17:50.033 19:18:27 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.033 19:18:27 -- common/autotest_common.sh@1185 -- # return 0 00:17:50.033 19:18:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:50.033 19:18:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:17:50.292 19:18:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:50.292 19:18:27 -- common/autotest_common.sh@1175 -- # local i=0 00:17:50.292 19:18:27 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.292 19:18:27 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:17:50.292 19:18:27 -- common/autotest_common.sh@1182 -- # sleep 2 00:17:52.197 19:18:29 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:17:52.197 19:18:29 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:17:52.197 19:18:29 -- common/autotest_common.sh@1184 -- # grep -c SPDK2 00:17:52.197 19:18:29 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:17:52.197 19:18:29 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:17:52.197 19:18:29 -- common/autotest_common.sh@1185 -- # return 0 00:17:52.197 19:18:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:17:52.197 19:18:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:17:52.456 19:18:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:17:52.456 19:18:29 -- common/autotest_common.sh@1175 -- # local i=0 00:17:52.456 19:18:29 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.456 19:18:29 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:17:52.456 19:18:29 -- common/autotest_common.sh@1182 -- # sleep 2 00:17:54.361 19:18:31 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:17:54.361 19:18:31 -- common/autotest_common.sh@1184 -- # grep -c SPDK3 00:17:54.361 19:18:31 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:17:54.361 19:18:31 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:17:54.361 19:18:31 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.361 19:18:31 -- common/autotest_common.sh@1185 -- # return 0 00:17:54.361 19:18:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:54.361 19:18:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:17:54.620 19:18:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:17:54.620 19:18:31 -- common/autotest_common.sh@1175 -- # local i=0 00:17:54.620 19:18:31 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:17:54.620 19:18:31 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:17:54.620 19:18:31 -- common/autotest_common.sh@1182 -- # sleep 2 00:17:56.525 19:18:33 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:17:56.525 19:18:33 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:17:56.525 19:18:33 -- common/autotest_common.sh@1184 -- # grep -c SPDK4 00:17:56.525 19:18:33 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:17:56.525 19:18:33 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:17:56.525 19:18:33 -- common/autotest_common.sh@1185 -- # return 0 00:17:56.525 19:18:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.525 19:18:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:17:56.784 19:18:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:17:56.784 19:18:34 -- common/autotest_common.sh@1175 -- # local i=0 00:17:56.784 19:18:34 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.784 19:18:34 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:17:56.784 19:18:34 -- common/autotest_common.sh@1182 -- # sleep 2 00:17:58.688 19:18:36 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:17:58.688 19:18:36 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:17:58.688 19:18:36 -- common/autotest_common.sh@1184 -- # grep -c SPDK5 00:17:58.688 19:18:36 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:17:58.688 19:18:36 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.688 19:18:36 
-- common/autotest_common.sh@1185 -- # return 0 00:17:58.688 19:18:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.688 19:18:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:17:58.947 19:18:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:17:58.947 19:18:36 -- common/autotest_common.sh@1175 -- # local i=0 00:17:58.947 19:18:36 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.947 19:18:36 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:17:58.947 19:18:36 -- common/autotest_common.sh@1182 -- # sleep 2 00:18:00.848 19:18:38 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:18:01.106 19:18:38 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:18:01.106 19:18:38 -- common/autotest_common.sh@1184 -- # grep -c SPDK6 00:18:01.106 19:18:38 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:18:01.106 19:18:38 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.106 19:18:38 -- common/autotest_common.sh@1185 -- # return 0 00:18:01.106 19:18:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.106 19:18:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:01.106 19:18:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:01.106 19:18:38 -- common/autotest_common.sh@1175 -- # local i=0 00:18:01.106 19:18:38 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.106 19:18:38 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:18:01.106 19:18:38 -- common/autotest_common.sh@1182 -- # sleep 2 00:18:03.662 19:18:40 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:18:03.662 19:18:40 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:18:03.662 19:18:40 -- common/autotest_common.sh@1184 -- # grep -c SPDK7 00:18:03.662 19:18:40 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:18:03.662 19:18:40 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.662 19:18:40 -- common/autotest_common.sh@1185 -- # return 0 00:18:03.662 19:18:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:03.662 19:18:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:03.662 19:18:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:03.662 19:18:40 -- common/autotest_common.sh@1175 -- # local i=0 00:18:03.662 19:18:40 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.662 19:18:40 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:18:03.662 19:18:40 -- common/autotest_common.sh@1182 -- # sleep 2 00:18:05.566 19:18:42 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:18:05.566 19:18:42 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:18:05.566 19:18:42 -- common/autotest_common.sh@1184 -- # grep -c SPDK8 00:18:05.566 19:18:42 -- common/autotest_common.sh@1184 -- # nvme_devices=1 
00:18:05.566 19:18:42 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.566 19:18:42 -- common/autotest_common.sh@1185 -- # return 0 00:18:05.566 19:18:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.566 19:18:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:05.566 19:18:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:05.566 19:18:42 -- common/autotest_common.sh@1175 -- # local i=0 00:18:05.566 19:18:42 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.566 19:18:42 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:18:05.566 19:18:42 -- common/autotest_common.sh@1182 -- # sleep 2 00:18:07.469 19:18:44 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:18:07.469 19:18:44 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:18:07.469 19:18:44 -- common/autotest_common.sh@1184 -- # grep -c SPDK9 00:18:07.469 19:18:44 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:18:07.469 19:18:44 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.469 19:18:44 -- common/autotest_common.sh@1185 -- # return 0 00:18:07.469 19:18:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:07.469 19:18:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:07.728 19:18:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:07.728 19:18:45 -- common/autotest_common.sh@1175 -- # local i=0 00:18:07.728 19:18:45 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.728 19:18:45 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:18:07.728 19:18:45 -- common/autotest_common.sh@1182 -- # sleep 2 00:18:10.260 19:18:47 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:18:10.260 19:18:47 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:18:10.260 19:18:47 -- common/autotest_common.sh@1184 -- # grep -c SPDK10 00:18:10.260 19:18:47 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:18:10.260 19:18:47 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:18:10.260 19:18:47 -- common/autotest_common.sh@1185 -- # return 0 00:18:10.260 19:18:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.260 19:18:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:10.260 19:18:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:10.260 19:18:47 -- common/autotest_common.sh@1175 -- # local i=0 00:18:10.260 19:18:47 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.260 19:18:47 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:18:10.260 19:18:47 -- common/autotest_common.sh@1182 -- # sleep 2 00:18:12.165 19:18:49 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:18:12.165 19:18:49 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:18:12.165 19:18:49 
-- common/autotest_common.sh@1184 -- # grep -c SPDK11 00:18:12.165 19:18:49 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:18:12.165 19:18:49 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.165 19:18:49 -- common/autotest_common.sh@1185 -- # return 0 00:18:12.165 19:18:49 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:12.165 [global] 00:18:12.165 thread=1 00:18:12.165 invalidate=1 00:18:12.165 rw=read 00:18:12.165 time_based=1 00:18:12.165 runtime=10 00:18:12.165 ioengine=libaio 00:18:12.165 direct=1 00:18:12.165 bs=262144 00:18:12.165 iodepth=64 00:18:12.165 norandommap=1 00:18:12.165 numjobs=1 00:18:12.165 00:18:12.165 [job0] 00:18:12.165 filename=/dev/nvme0n1 00:18:12.165 [job1] 00:18:12.165 filename=/dev/nvme10n1 00:18:12.165 [job2] 00:18:12.165 filename=/dev/nvme1n1 00:18:12.165 [job3] 00:18:12.165 filename=/dev/nvme2n1 00:18:12.165 [job4] 00:18:12.165 filename=/dev/nvme3n1 00:18:12.165 [job5] 00:18:12.165 filename=/dev/nvme4n1 00:18:12.165 [job6] 00:18:12.165 filename=/dev/nvme5n1 00:18:12.165 [job7] 00:18:12.165 filename=/dev/nvme6n1 00:18:12.165 [job8] 00:18:12.165 filename=/dev/nvme7n1 00:18:12.165 [job9] 00:18:12.165 filename=/dev/nvme8n1 00:18:12.165 [job10] 00:18:12.165 filename=/dev/nvme9n1 00:18:12.165 Could not set queue depth (nvme0n1) 00:18:12.165 Could not set queue depth (nvme10n1) 00:18:12.165 Could not set queue depth (nvme1n1) 00:18:12.165 Could not set queue depth (nvme2n1) 00:18:12.165 Could not set queue depth (nvme3n1) 00:18:12.165 Could not set queue depth (nvme4n1) 00:18:12.165 Could not set queue depth (nvme5n1) 00:18:12.165 Could not set queue depth (nvme6n1) 00:18:12.165 Could not set queue depth (nvme7n1) 00:18:12.165 Could not set queue depth (nvme8n1) 00:18:12.165 Could not set queue depth (nvme9n1) 00:18:12.165 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:12.165 fio-3.35 00:18:12.165 Starting 11 threads 00:18:24.370 00:18:24.370 job0: (groupid=0, jobs=1): err= 0: pid=79062: Wed Feb 14 19:18:59 2024 00:18:24.370 read: IOPS=599, BW=150MiB/s (157MB/s)(1513MiB/10090msec) 00:18:24.370 slat (usec): min=22, max=90489, avg=1648.24, stdev=6271.34 
00:18:24.370 clat (msec): min=22, max=194, avg=104.82, stdev=17.38 00:18:24.370 lat (msec): min=22, max=205, avg=106.47, stdev=18.39 00:18:24.370 clat percentiles (msec): 00:18:24.370 | 1.00th=[ 64], 5.00th=[ 77], 10.00th=[ 84], 20.00th=[ 91], 00:18:24.370 | 30.00th=[ 96], 40.00th=[ 103], 50.00th=[ 107], 60.00th=[ 111], 00:18:24.370 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 125], 95.00th=[ 131], 00:18:24.370 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 194], 99.95th=[ 194], 00:18:24.370 | 99.99th=[ 194] 00:18:24.370 bw ( KiB/s): min=127488, max=211968, per=8.53%, avg=153298.30, stdev=21598.51, samples=20 00:18:24.370 iops : min= 498, max= 828, avg=598.70, stdev=84.39, samples=20 00:18:24.370 lat (msec) : 50=0.35%, 100=36.06%, 250=63.59% 00:18:24.370 cpu : usr=0.25%, sys=2.06%, ctx=1125, majf=0, minf=4097 00:18:24.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:24.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.370 issued rwts: total=6053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.370 job1: (groupid=0, jobs=1): err= 0: pid=79063: Wed Feb 14 19:18:59 2024 00:18:24.370 read: IOPS=885, BW=221MiB/s (232MB/s)(2229MiB/10070msec) 00:18:24.370 slat (usec): min=16, max=109381, avg=1101.15, stdev=4166.60 00:18:24.370 clat (msec): min=9, max=209, avg=71.07, stdev=25.04 00:18:24.370 lat (msec): min=9, max=248, avg=72.17, stdev=25.58 00:18:24.370 clat percentiles (msec): 00:18:24.370 | 1.00th=[ 21], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 50], 00:18:24.370 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 78], 60.00th=[ 83], 00:18:24.370 | 70.00th=[ 87], 80.00th=[ 92], 90.00th=[ 97], 95.00th=[ 103], 00:18:24.370 | 99.00th=[ 126], 99.50th=[ 138], 99.90th=[ 197], 99.95th=[ 197], 00:18:24.370 | 99.99th=[ 209] 00:18:24.370 bw ( KiB/s): min=160958, max=509952, per=12.61%, avg=226465.05, stdev=88761.64, samples=20 00:18:24.370 iops : min= 628, max= 1992, avg=884.50, stdev=346.76, samples=20 00:18:24.370 lat (msec) : 10=0.04%, 20=0.94%, 50=19.33%, 100=73.13%, 250=6.55% 00:18:24.370 cpu : usr=0.36%, sys=3.05%, ctx=1776, majf=0, minf=4097 00:18:24.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:24.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.370 issued rwts: total=8914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.370 job2: (groupid=0, jobs=1): err= 0: pid=79064: Wed Feb 14 19:18:59 2024 00:18:24.370 read: IOPS=534, BW=134MiB/s (140MB/s)(1350MiB/10098msec) 00:18:24.370 slat (usec): min=21, max=72534, avg=1852.85, stdev=6426.18 00:18:24.370 clat (msec): min=29, max=186, avg=117.54, stdev=14.22 00:18:24.370 lat (msec): min=30, max=224, avg=119.39, stdev=15.41 00:18:24.370 clat percentiles (msec): 00:18:24.370 | 1.00th=[ 92], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 107], 00:18:24.370 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 116], 60.00th=[ 120], 00:18:24.370 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 136], 95.00th=[ 144], 00:18:24.370 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 182], 99.95th=[ 182], 00:18:24.370 | 99.99th=[ 186] 00:18:24.370 bw ( KiB/s): min=99328, max=153600, per=7.60%, avg=136520.15, stdev=12082.28, samples=20 00:18:24.370 iops : min= 388, 
max= 600, avg=533.10, stdev=47.22, samples=20 00:18:24.370 lat (msec) : 50=0.11%, 100=6.22%, 250=93.67% 00:18:24.370 cpu : usr=0.23%, sys=1.74%, ctx=1031, majf=0, minf=4097 00:18:24.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:24.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.370 issued rwts: total=5400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.370 job3: (groupid=0, jobs=1): err= 0: pid=79065: Wed Feb 14 19:18:59 2024 00:18:24.371 read: IOPS=563, BW=141MiB/s (148MB/s)(1423MiB/10092msec) 00:18:24.371 slat (usec): min=22, max=128024, avg=1731.21, stdev=6639.22 00:18:24.371 clat (msec): min=23, max=312, avg=111.58, stdev=23.62 00:18:24.371 lat (msec): min=25, max=312, avg=113.31, stdev=24.52 00:18:24.371 clat percentiles (msec): 00:18:24.371 | 1.00th=[ 73], 5.00th=[ 84], 10.00th=[ 89], 20.00th=[ 96], 00:18:24.371 | 30.00th=[ 103], 40.00th=[ 107], 50.00th=[ 111], 60.00th=[ 114], 00:18:24.371 | 70.00th=[ 118], 80.00th=[ 123], 90.00th=[ 129], 95.00th=[ 142], 00:18:24.371 | 99.00th=[ 218], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 309], 00:18:24.371 | 99.99th=[ 313] 00:18:24.371 bw ( KiB/s): min=90624, max=180224, per=8.02%, avg=144082.75, stdev=22227.09, samples=20 00:18:24.371 iops : min= 354, max= 704, avg=562.70, stdev=86.83, samples=20 00:18:24.371 lat (msec) : 50=0.25%, 100=26.24%, 250=72.83%, 500=0.69% 00:18:24.371 cpu : usr=0.24%, sys=2.23%, ctx=1072, majf=0, minf=4097 00:18:24.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:24.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.371 issued rwts: total=5690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.371 job4: (groupid=0, jobs=1): err= 0: pid=79066: Wed Feb 14 19:18:59 2024 00:18:24.371 read: IOPS=639, BW=160MiB/s (168MB/s)(1615MiB/10099msec) 00:18:24.371 slat (usec): min=15, max=95050, avg=1505.20, stdev=5791.86 00:18:24.371 clat (msec): min=25, max=210, avg=98.33, stdev=31.45 00:18:24.371 lat (msec): min=25, max=242, avg=99.84, stdev=32.32 00:18:24.371 clat percentiles (msec): 00:18:24.371 | 1.00th=[ 47], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 62], 00:18:24.371 | 30.00th=[ 68], 40.00th=[ 103], 50.00th=[ 110], 60.00th=[ 114], 00:18:24.371 | 70.00th=[ 117], 80.00th=[ 123], 90.00th=[ 130], 95.00th=[ 142], 00:18:24.371 | 99.00th=[ 178], 99.50th=[ 178], 99.90th=[ 211], 99.95th=[ 211], 00:18:24.371 | 99.99th=[ 211] 00:18:24.371 bw ( KiB/s): min=100864, max=278528, per=9.11%, avg=163642.30, stdev=52615.63, samples=20 00:18:24.371 iops : min= 394, max= 1088, avg=639.10, stdev=205.57, samples=20 00:18:24.371 lat (msec) : 50=3.58%, 100=34.65%, 250=61.77% 00:18:24.371 cpu : usr=0.24%, sys=2.47%, ctx=1182, majf=0, minf=4097 00:18:24.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:24.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.371 issued rwts: total=6459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.371 job5: (groupid=0, jobs=1): err= 0: pid=79067: Wed Feb 14 
19:18:59 2024 00:18:24.371 read: IOPS=524, BW=131MiB/s (138MB/s)(1323MiB/10091msec) 00:18:24.371 slat (usec): min=22, max=89603, avg=1885.59, stdev=6506.09 00:18:24.371 clat (msec): min=37, max=202, avg=119.90, stdev=15.92 00:18:24.371 lat (msec): min=37, max=236, avg=121.79, stdev=17.05 00:18:24.371 clat percentiles (msec): 00:18:24.371 | 1.00th=[ 81], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 110], 00:18:24.371 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 120], 60.00th=[ 123], 00:18:24.371 | 70.00th=[ 125], 80.00th=[ 129], 90.00th=[ 138], 95.00th=[ 150], 00:18:24.371 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 203], 00:18:24.371 | 99.99th=[ 203] 00:18:24.371 bw ( KiB/s): min=112352, max=148694, per=7.45%, avg=133902.05, stdev=9563.61, samples=20 00:18:24.371 iops : min= 438, max= 580, avg=522.95, stdev=37.38, samples=20 00:18:24.371 lat (msec) : 50=0.79%, 100=4.35%, 250=94.86% 00:18:24.371 cpu : usr=0.19%, sys=2.01%, ctx=991, majf=0, minf=4097 00:18:24.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:24.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.371 issued rwts: total=5293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.371 job6: (groupid=0, jobs=1): err= 0: pid=79068: Wed Feb 14 19:18:59 2024 00:18:24.371 read: IOPS=532, BW=133MiB/s (140MB/s)(1349MiB/10138msec) 00:18:24.371 slat (usec): min=20, max=112716, avg=1823.81, stdev=6410.69 00:18:24.371 clat (msec): min=15, max=293, avg=118.23, stdev=24.50 00:18:24.371 lat (msec): min=16, max=304, avg=120.05, stdev=25.26 00:18:24.371 clat percentiles (msec): 00:18:24.371 | 1.00th=[ 45], 5.00th=[ 92], 10.00th=[ 101], 20.00th=[ 107], 00:18:24.371 | 30.00th=[ 111], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 121], 00:18:24.371 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 134], 95.00th=[ 142], 00:18:24.371 | 99.00th=[ 255], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 296], 00:18:24.371 | 99.99th=[ 296] 00:18:24.371 bw ( KiB/s): min=110592, max=175104, per=7.60%, avg=136441.40, stdev=12446.03, samples=20 00:18:24.371 iops : min= 432, max= 684, avg=532.80, stdev=48.63, samples=20 00:18:24.371 lat (msec) : 20=0.19%, 50=1.06%, 100=8.10%, 250=89.64%, 500=1.02% 00:18:24.371 cpu : usr=0.23%, sys=1.88%, ctx=1106, majf=0, minf=4097 00:18:24.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:24.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.371 issued rwts: total=5396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.371 job7: (groupid=0, jobs=1): err= 0: pid=79069: Wed Feb 14 19:18:59 2024 00:18:24.371 read: IOPS=842, BW=211MiB/s (221MB/s)(2119MiB/10057msec) 00:18:24.371 slat (usec): min=21, max=47115, avg=1176.65, stdev=4073.70 00:18:24.371 clat (msec): min=18, max=128, avg=74.66, stdev=14.09 00:18:24.371 lat (msec): min=19, max=129, avg=75.83, stdev=14.54 00:18:24.371 clat percentiles (msec): 00:18:24.371 | 1.00th=[ 45], 5.00th=[ 54], 10.00th=[ 57], 20.00th=[ 62], 00:18:24.371 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 80], 00:18:24.371 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 92], 95.00th=[ 96], 00:18:24.371 | 99.00th=[ 105], 99.50th=[ 111], 99.90th=[ 125], 99.95th=[ 129], 
00:18:24.371 | 99.99th=[ 129] 00:18:24.371 bw ( KiB/s): min=181760, max=263695, per=11.99%, avg=215376.90, stdev=30991.17, samples=20 00:18:24.371 iops : min= 710, max= 1030, avg=841.20, stdev=120.98, samples=20 00:18:24.371 lat (msec) : 20=0.04%, 50=2.98%, 100=95.28%, 250=1.70% 00:18:24.371 cpu : usr=0.36%, sys=2.68%, ctx=1450, majf=0, minf=4097 00:18:24.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:24.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.371 issued rwts: total=8476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.371 job8: (groupid=0, jobs=1): err= 0: pid=79070: Wed Feb 14 19:18:59 2024 00:18:24.371 read: IOPS=516, BW=129MiB/s (135MB/s)(1302MiB/10084msec) 00:18:24.371 slat (usec): min=22, max=132262, avg=1914.53, stdev=7185.32 00:18:24.371 clat (msec): min=64, max=267, avg=121.85, stdev=15.20 00:18:24.371 lat (msec): min=64, max=280, avg=123.77, stdev=16.68 00:18:24.371 clat percentiles (msec): 00:18:24.371 | 1.00th=[ 87], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:18:24.371 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 124], 00:18:24.371 | 70.00th=[ 127], 80.00th=[ 132], 90.00th=[ 138], 95.00th=[ 148], 00:18:24.371 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 201], 99.95th=[ 268], 00:18:24.371 | 99.99th=[ 268] 00:18:24.371 bw ( KiB/s): min=96063, max=144896, per=7.33%, avg=131688.70, stdev=9933.57, samples=20 00:18:24.371 iops : min= 375, max= 566, avg=514.35, stdev=38.83, samples=20 00:18:24.371 lat (msec) : 100=3.84%, 250=96.08%, 500=0.08% 00:18:24.371 cpu : usr=0.20%, sys=2.11%, ctx=923, majf=0, minf=4097 00:18:24.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:24.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.371 issued rwts: total=5208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.371 job9: (groupid=0, jobs=1): err= 0: pid=79071: Wed Feb 14 19:18:59 2024 00:18:24.371 read: IOPS=583, BW=146MiB/s (153MB/s)(1473MiB/10088msec) 00:18:24.371 slat (usec): min=22, max=90934, avg=1681.49, stdev=6000.51 00:18:24.371 clat (msec): min=19, max=213, avg=107.75, stdev=18.26 00:18:24.371 lat (msec): min=20, max=213, avg=109.43, stdev=19.19 00:18:24.371 clat percentiles (msec): 00:18:24.371 | 1.00th=[ 62], 5.00th=[ 82], 10.00th=[ 87], 20.00th=[ 93], 00:18:24.371 | 30.00th=[ 99], 40.00th=[ 105], 50.00th=[ 110], 60.00th=[ 114], 00:18:24.371 | 70.00th=[ 118], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 136], 00:18:24.371 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 199], 00:18:24.371 | 99.99th=[ 213] 00:18:24.371 bw ( KiB/s): min=126464, max=195072, per=8.30%, avg=149117.55, stdev=19614.88, samples=20 00:18:24.371 iops : min= 494, max= 762, avg=582.35, stdev=76.62, samples=20 00:18:24.371 lat (msec) : 20=0.02%, 50=0.73%, 100=31.60%, 250=67.66% 00:18:24.371 cpu : usr=0.21%, sys=1.92%, ctx=1124, majf=0, minf=4097 00:18:24.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:24.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.371 issued rwts: total=5890,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:24.371 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.371 job10: (groupid=0, jobs=1): err= 0: pid=79072: Wed Feb 14 19:18:59 2024 00:18:24.371 read: IOPS=831, BW=208MiB/s (218MB/s)(2090MiB/10058msec) 00:18:24.371 slat (usec): min=22, max=39615, avg=1192.83, stdev=4171.56 00:18:24.371 clat (msec): min=16, max=145, avg=75.67, stdev=15.64 00:18:24.371 lat (msec): min=17, max=145, avg=76.87, stdev=16.22 00:18:24.371 clat percentiles (msec): 00:18:24.371 | 1.00th=[ 45], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 61], 00:18:24.372 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 83], 00:18:24.372 | 70.00th=[ 86], 80.00th=[ 90], 90.00th=[ 94], 95.00th=[ 99], 00:18:24.372 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 127], 99.95th=[ 127], 00:18:24.372 | 99.99th=[ 146] 00:18:24.372 bw ( KiB/s): min=175104, max=272384, per=11.82%, avg=212380.20, stdev=37584.26, samples=20 00:18:24.372 iops : min= 684, max= 1064, avg=829.50, stdev=146.75, samples=20 00:18:24.372 lat (msec) : 20=0.20%, 50=3.23%, 100=93.47%, 250=3.10% 00:18:24.372 cpu : usr=0.33%, sys=3.08%, ctx=1654, majf=0, minf=4097 00:18:24.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:24.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:24.372 issued rwts: total=8360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.372 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.372 00:18:24.372 Run status group 0 (all jobs): 00:18:24.372 READ: bw=1754MiB/s (1839MB/s), 129MiB/s-221MiB/s (135MB/s-232MB/s), io=17.4GiB (18.6GB), run=10057-10138msec 00:18:24.372 00:18:24.372 Disk stats (read/write): 00:18:24.372 nvme0n1: ios=11978/0, merge=0/0, ticks=1239456/0, in_queue=1239456, util=97.55% 00:18:24.372 nvme10n1: ios=17740/0, merge=0/0, ticks=1236631/0, in_queue=1236631, util=97.73% 00:18:24.372 nvme1n1: ios=10672/0, merge=0/0, ticks=1240563/0, in_queue=1240563, util=97.79% 00:18:24.372 nvme2n1: ios=11264/0, merge=0/0, ticks=1241708/0, in_queue=1241708, util=98.08% 00:18:24.372 nvme3n1: ios=12806/0, merge=0/0, ticks=1235863/0, in_queue=1235863, util=98.08% 00:18:24.372 nvme4n1: ios=10459/0, merge=0/0, ticks=1238497/0, in_queue=1238497, util=98.34% 00:18:24.372 nvme5n1: ios=10697/0, merge=0/0, ticks=1241297/0, in_queue=1241297, util=98.38% 00:18:24.372 nvme6n1: ios=16825/0, merge=0/0, ticks=1235493/0, in_queue=1235493, util=98.12% 00:18:24.372 nvme7n1: ios=10289/0, merge=0/0, ticks=1240637/0, in_queue=1240637, util=98.77% 00:18:24.372 nvme8n1: ios=11653/0, merge=0/0, ticks=1236779/0, in_queue=1236779, util=98.69% 00:18:24.372 nvme9n1: ios=16592/0, merge=0/0, ticks=1238222/0, in_queue=1238222, util=99.05% 00:18:24.372 19:19:00 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:24.372 [global] 00:18:24.372 thread=1 00:18:24.372 invalidate=1 00:18:24.372 rw=randwrite 00:18:24.372 time_based=1 00:18:24.372 runtime=10 00:18:24.372 ioengine=libaio 00:18:24.372 direct=1 00:18:24.372 bs=262144 00:18:24.372 iodepth=64 00:18:24.372 norandommap=1 00:18:24.372 numjobs=1 00:18:24.372 00:18:24.372 [job0] 00:18:24.372 filename=/dev/nvme0n1 00:18:24.372 [job1] 00:18:24.372 filename=/dev/nvme10n1 00:18:24.372 [job2] 00:18:24.372 filename=/dev/nvme1n1 00:18:24.372 [job3] 00:18:24.372 filename=/dev/nvme2n1 00:18:24.372 [job4] 00:18:24.372 filename=/dev/nvme3n1 00:18:24.372 
[job5] 00:18:24.372 filename=/dev/nvme4n1 00:18:24.372 [job6] 00:18:24.372 filename=/dev/nvme5n1 00:18:24.372 [job7] 00:18:24.372 filename=/dev/nvme6n1 00:18:24.372 [job8] 00:18:24.372 filename=/dev/nvme7n1 00:18:24.372 [job9] 00:18:24.372 filename=/dev/nvme8n1 00:18:24.372 [job10] 00:18:24.372 filename=/dev/nvme9n1 00:18:24.372 Could not set queue depth (nvme0n1) 00:18:24.372 Could not set queue depth (nvme10n1) 00:18:24.372 Could not set queue depth (nvme1n1) 00:18:24.372 Could not set queue depth (nvme2n1) 00:18:24.372 Could not set queue depth (nvme3n1) 00:18:24.372 Could not set queue depth (nvme4n1) 00:18:24.372 Could not set queue depth (nvme5n1) 00:18:24.372 Could not set queue depth (nvme6n1) 00:18:24.372 Could not set queue depth (nvme7n1) 00:18:24.372 Could not set queue depth (nvme8n1) 00:18:24.372 Could not set queue depth (nvme9n1) 00:18:24.372 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:24.372 fio-3.35 00:18:24.372 Starting 11 threads 00:18:34.354 00:18:34.354 job0: (groupid=0, jobs=1): err= 0: pid=79272: Wed Feb 14 19:19:10 2024 00:18:34.354 write: IOPS=268, BW=67.0MiB/s (70.3MB/s)(685MiB/10222msec); 0 zone resets 00:18:34.354 slat (usec): min=20, max=93464, avg=3644.35, stdev=7112.03 00:18:34.354 clat (msec): min=96, max=544, avg=234.91, stdev=43.46 00:18:34.354 lat (msec): min=96, max=544, avg=238.55, stdev=43.36 00:18:34.354 clat percentiles (msec): 00:18:34.354 | 1.00th=[ 171], 5.00th=[ 190], 10.00th=[ 199], 20.00th=[ 211], 00:18:34.354 | 30.00th=[ 220], 40.00th=[ 228], 50.00th=[ 230], 60.00th=[ 232], 00:18:34.354 | 70.00th=[ 236], 80.00th=[ 243], 90.00th=[ 271], 95.00th=[ 330], 00:18:34.354 | 99.00th=[ 409], 99.50th=[ 489], 99.90th=[ 527], 99.95th=[ 542], 00:18:34.354 | 99.99th=[ 542] 00:18:34.354 bw ( KiB/s): min=47104, max=75776, per=4.36%, avg=68548.50, stdev=8315.52, samples=20 00:18:34.354 iops : min= 184, max= 296, avg=267.70, stdev=32.49, samples=20 00:18:34.354 lat (msec) : 100=0.15%, 250=88.62%, 500=10.87%, 750=0.36% 00:18:34.354 cpu : usr=0.68%, sys=1.04%, ctx=2410, majf=0, minf=1 00:18:34.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:18:34.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.354 issued rwts: total=0,2741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.354 job1: (groupid=0, jobs=1): err= 0: pid=79273: Wed Feb 14 19:19:10 2024 00:18:34.354 write: IOPS=271, BW=67.8MiB/s (71.1MB/s)(694MiB/10233msec); 0 zone resets 00:18:34.354 slat (usec): min=24, max=55730, avg=3599.75, stdev=7058.63 00:18:34.354 clat (msec): min=7, max=529, avg=232.04, stdev=45.65 00:18:34.354 lat (msec): min=7, max=529, avg=235.64, stdev=45.61 00:18:34.354 clat percentiles (msec): 00:18:34.354 | 1.00th=[ 56], 5.00th=[ 188], 10.00th=[ 197], 20.00th=[ 209], 00:18:34.354 | 30.00th=[ 218], 40.00th=[ 224], 50.00th=[ 232], 60.00th=[ 236], 00:18:34.354 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 279], 95.00th=[ 309], 00:18:34.354 | 99.00th=[ 397], 99.50th=[ 477], 99.90th=[ 514], 99.95th=[ 531], 00:18:34.354 | 99.99th=[ 531] 00:18:34.354 bw ( KiB/s): min=53354, max=77979, per=4.42%, avg=69541.65, stdev=6959.17, samples=20 00:18:34.354 iops : min= 208, max= 304, avg=271.30, stdev=27.19, samples=20 00:18:34.354 lat (msec) : 10=0.29%, 50=0.58%, 100=0.72%, 250=81.53%, 500=16.67% 00:18:34.354 lat (msec) : 750=0.22% 00:18:34.354 cpu : usr=0.61%, sys=0.90%, ctx=3036, majf=0, minf=1 00:18:34.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:18:34.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.354 issued rwts: total=0,2777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.354 job2: (groupid=0, jobs=1): err= 0: pid=79285: Wed Feb 14 19:19:10 2024 00:18:34.354 write: IOPS=265, BW=66.5MiB/s (69.7MB/s)(680MiB/10223msec); 0 zone resets 00:18:34.354 slat (usec): min=21, max=55698, avg=3670.77, stdev=7146.16 00:18:34.354 clat (msec): min=57, max=525, avg=236.82, stdev=42.92 00:18:34.354 lat (msec): min=57, max=544, avg=240.50, stdev=42.79 00:18:34.354 clat percentiles (msec): 00:18:34.354 | 1.00th=[ 122], 5.00th=[ 190], 10.00th=[ 203], 20.00th=[ 215], 00:18:34.354 | 30.00th=[ 224], 40.00th=[ 230], 50.00th=[ 234], 60.00th=[ 239], 00:18:34.354 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 326], 00:18:34.354 | 99.00th=[ 409], 99.50th=[ 468], 99.90th=[ 527], 99.95th=[ 527], 00:18:34.354 | 99.99th=[ 527] 00:18:34.354 bw ( KiB/s): min=50688, max=75776, per=4.32%, avg=67986.20, stdev=6742.30, samples=20 00:18:34.354 iops : min= 198, max= 296, avg=265.55, stdev=26.32, samples=20 00:18:34.354 lat (msec) : 100=0.74%, 250=87.64%, 500=11.33%, 750=0.29% 00:18:34.354 cpu : usr=0.67%, sys=1.05%, ctx=1975, majf=0, minf=1 00:18:34.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:18:34.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.354 issued rwts: total=0,2719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.354 job3: (groupid=0, jobs=1): err= 0: pid=79286: Wed Feb 14 19:19:10 2024 00:18:34.354 write: IOPS=261, BW=65.4MiB/s (68.6MB/s)(669MiB/10225msec); 0 zone resets 00:18:34.354 slat (usec): min=21, max=71285, avg=3733.23, stdev=7450.29 00:18:34.354 clat (msec): min=25, max=531, avg=240.69, 
stdev=45.64 00:18:34.354 lat (msec): min=25, max=532, avg=244.42, stdev=45.52 00:18:34.354 clat percentiles (msec): 00:18:34.354 | 1.00th=[ 73], 5.00th=[ 192], 10.00th=[ 203], 20.00th=[ 220], 00:18:34.354 | 30.00th=[ 230], 40.00th=[ 236], 50.00th=[ 241], 60.00th=[ 243], 00:18:34.354 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 275], 95.00th=[ 326], 00:18:34.354 | 99.00th=[ 397], 99.50th=[ 477], 99.90th=[ 514], 99.95th=[ 531], 00:18:34.354 | 99.99th=[ 531] 00:18:34.354 bw ( KiB/s): min=51200, max=75776, per=4.25%, avg=66892.80, stdev=6590.96, samples=20 00:18:34.354 iops : min= 200, max= 296, avg=261.30, stdev=25.75, samples=20 00:18:34.354 lat (msec) : 50=0.60%, 100=0.75%, 250=77.84%, 500=20.59%, 750=0.22% 00:18:34.354 cpu : usr=0.67%, sys=0.65%, ctx=2507, majf=0, minf=1 00:18:34.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:18:34.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.354 issued rwts: total=0,2676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.354 job4: (groupid=0, jobs=1): err= 0: pid=79287: Wed Feb 14 19:19:10 2024 00:18:34.354 write: IOPS=772, BW=193MiB/s (203MB/s)(1947MiB/10081msec); 0 zone resets 00:18:34.354 slat (usec): min=18, max=9013, avg=1268.53, stdev=2152.06 00:18:34.355 clat (msec): min=6, max=168, avg=81.52, stdev= 7.91 00:18:34.355 lat (msec): min=7, max=168, avg=82.79, stdev= 7.75 00:18:34.355 clat percentiles (msec): 00:18:34.355 | 1.00th=[ 71], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 79], 00:18:34.355 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 00:18:34.355 | 70.00th=[ 83], 80.00th=[ 84], 90.00th=[ 89], 95.00th=[ 93], 00:18:34.355 | 99.00th=[ 101], 99.50th=[ 112], 99.90th=[ 159], 99.95th=[ 163], 00:18:34.355 | 99.99th=[ 169] 00:18:34.355 bw ( KiB/s): min=171008, max=206848, per=12.57%, avg=197765.55, stdev=9029.87, samples=20 00:18:34.355 iops : min= 668, max= 808, avg=772.50, stdev=35.27, samples=20 00:18:34.355 lat (msec) : 10=0.05%, 20=0.10%, 50=0.54%, 100=98.32%, 250=0.99% 00:18:34.355 cpu : usr=1.22%, sys=2.26%, ctx=9605, majf=0, minf=1 00:18:34.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:34.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.355 issued rwts: total=0,7789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.355 job5: (groupid=0, jobs=1): err= 0: pid=79288: Wed Feb 14 19:19:10 2024 00:18:34.355 write: IOPS=778, BW=195MiB/s (204MB/s)(1961MiB/10074msec); 0 zone resets 00:18:34.355 slat (usec): min=19, max=21555, avg=1269.52, stdev=2152.24 00:18:34.355 clat (msec): min=13, max=165, avg=80.92, stdev= 8.01 00:18:34.355 lat (msec): min=13, max=165, avg=82.19, stdev= 7.86 00:18:34.355 clat percentiles (msec): 00:18:34.355 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 75], 20.00th=[ 77], 00:18:34.355 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:18:34.355 | 70.00th=[ 82], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 94], 00:18:34.355 | 99.00th=[ 113], 99.50th=[ 124], 99.90th=[ 155], 99.95th=[ 159], 00:18:34.355 | 99.99th=[ 165] 00:18:34.355 bw ( KiB/s): min=171008, max=207872, per=12.65%, avg=199142.40, stdev=11678.32, samples=20 00:18:34.355 iops : min= 668, max= 812, avg=777.90, 
stdev=45.62, samples=20 00:18:34.355 lat (msec) : 20=0.05%, 50=0.20%, 100=97.49%, 250=2.26% 00:18:34.355 cpu : usr=1.59%, sys=2.28%, ctx=10029, majf=0, minf=1 00:18:34.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:34.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.355 issued rwts: total=0,7842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.355 job6: (groupid=0, jobs=1): err= 0: pid=79289: Wed Feb 14 19:19:10 2024 00:18:34.355 write: IOPS=318, BW=79.6MiB/s (83.5MB/s)(814MiB/10221msec); 0 zone resets 00:18:34.355 slat (usec): min=19, max=44441, avg=2980.62, stdev=5993.28 00:18:34.355 clat (msec): min=20, max=514, avg=197.83, stdev=68.27 00:18:34.355 lat (msec): min=20, max=514, avg=200.81, stdev=69.03 00:18:34.355 clat percentiles (msec): 00:18:34.355 | 1.00th=[ 74], 5.00th=[ 78], 10.00th=[ 82], 20.00th=[ 146], 00:18:34.355 | 30.00th=[ 194], 40.00th=[ 205], 50.00th=[ 213], 60.00th=[ 222], 00:18:34.355 | 70.00th=[ 230], 80.00th=[ 236], 90.00th=[ 243], 95.00th=[ 305], 00:18:34.355 | 99.00th=[ 359], 99.50th=[ 435], 99.90th=[ 498], 99.95th=[ 514], 00:18:34.355 | 99.99th=[ 514] 00:18:34.355 bw ( KiB/s): min=53248, max=184320, per=5.19%, avg=81726.40, stdev=32089.57, samples=20 00:18:34.355 iops : min= 208, max= 720, avg=319.20, stdev=125.36, samples=20 00:18:34.355 lat (msec) : 50=0.49%, 100=18.27%, 250=72.45%, 500=8.72%, 750=0.06% 00:18:34.355 cpu : usr=0.62%, sys=0.85%, ctx=3495, majf=0, minf=1 00:18:34.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:34.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.355 issued rwts: total=0,3256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.355 job7: (groupid=0, jobs=1): err= 0: pid=79290: Wed Feb 14 19:19:10 2024 00:18:34.355 write: IOPS=777, BW=194MiB/s (204MB/s)(1961MiB/10088msec); 0 zone resets 00:18:34.355 slat (usec): min=15, max=18875, avg=1260.20, stdev=2159.21 00:18:34.355 clat (msec): min=4, max=174, avg=81.01, stdev= 9.35 00:18:34.355 lat (msec): min=6, max=174, avg=82.27, stdev= 9.26 00:18:34.355 clat percentiles (msec): 00:18:34.355 | 1.00th=[ 40], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 78], 00:18:34.355 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 00:18:34.355 | 70.00th=[ 83], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 93], 00:18:34.355 | 99.00th=[ 96], 99.50th=[ 116], 99.90th=[ 163], 99.95th=[ 169], 00:18:34.355 | 99.99th=[ 176] 00:18:34.355 bw ( KiB/s): min=173402, max=212905, per=12.66%, avg=199296.90, stdev=9355.26, samples=20 00:18:34.355 iops : min= 677, max= 831, avg=778.30, stdev=36.53, samples=20 00:18:34.355 lat (msec) : 10=0.15%, 20=0.38%, 50=0.71%, 100=98.06%, 250=0.69% 00:18:34.355 cpu : usr=1.14%, sys=1.66%, ctx=9669, majf=0, minf=1 00:18:34.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:34.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.355 issued rwts: total=0,7845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.355 job8: (groupid=0, jobs=1): 
err= 0: pid=79291: Wed Feb 14 19:19:10 2024 00:18:34.355 write: IOPS=753, BW=188MiB/s (197MB/s)(1899MiB/10084msec); 0 zone resets 00:18:34.355 slat (usec): min=16, max=53800, avg=1276.04, stdev=2445.81 00:18:34.355 clat (msec): min=5, max=243, avg=83.66, stdev=27.03 00:18:34.355 lat (msec): min=7, max=246, avg=84.94, stdev=27.32 00:18:34.355 clat percentiles (msec): 00:18:34.355 | 1.00th=[ 34], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77], 00:18:34.355 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:18:34.355 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 93], 95.00th=[ 95], 00:18:34.355 | 99.00th=[ 234], 99.50th=[ 236], 99.90th=[ 241], 99.95th=[ 243], 00:18:34.355 | 99.99th=[ 243] 00:18:34.355 bw ( KiB/s): min=75624, max=220160, per=12.25%, avg=192837.20, stdev=32731.01, samples=20 00:18:34.355 iops : min= 295, max= 860, avg=753.25, stdev=127.93, samples=20 00:18:34.355 lat (msec) : 10=0.07%, 20=0.37%, 50=1.87%, 100=93.38%, 250=4.32% 00:18:34.355 cpu : usr=1.14%, sys=1.34%, ctx=10513, majf=0, minf=1 00:18:34.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:34.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.355 issued rwts: total=0,7596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.355 job9: (groupid=0, jobs=1): err= 0: pid=79292: Wed Feb 14 19:19:10 2024 00:18:34.355 write: IOPS=1078, BW=270MiB/s (283MB/s)(2759MiB/10235msec); 0 zone resets 00:18:34.355 slat (usec): min=14, max=89896, avg=886.90, stdev=2591.79 00:18:34.355 clat (usec): min=1517, max=541559, avg=58428.66, stdev=61673.34 00:18:34.355 lat (usec): min=1566, max=541599, avg=59315.56, stdev=62521.13 00:18:34.355 clat percentiles (msec): 00:18:34.355 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:18:34.355 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:18:34.355 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 241], 00:18:34.355 | 99.00th=[ 342], 99.50th=[ 355], 99.90th=[ 477], 99.95th=[ 523], 00:18:34.355 | 99.99th=[ 542] 00:18:34.355 bw ( KiB/s): min=47198, max=389120, per=17.85%, avg=280907.95, stdev=150600.71, samples=20 00:18:34.355 iops : min= 184, max= 1520, avg=1097.15, stdev=588.39, samples=20 00:18:34.355 lat (msec) : 2=0.01%, 4=0.05%, 10=0.14%, 20=0.14%, 50=91.61% 00:18:34.355 lat (msec) : 100=0.97%, 250=3.32%, 500=3.71%, 750=0.05% 00:18:34.355 cpu : usr=1.48%, sys=1.89%, ctx=14541, majf=0, minf=1 00:18:34.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:34.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.355 issued rwts: total=0,11037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.355 job10: (groupid=0, jobs=1): err= 0: pid=79293: Wed Feb 14 19:19:10 2024 00:18:34.355 write: IOPS=660, BW=165MiB/s (173MB/s)(1660MiB/10043msec); 0 zone resets 00:18:34.355 slat (usec): min=18, max=56757, avg=1460.22, stdev=4113.49 00:18:34.355 clat (msec): min=7, max=257, avg=95.34, stdev=83.89 00:18:34.355 lat (msec): min=7, max=257, avg=96.80, stdev=85.11 00:18:34.355 clat percentiles (msec): 00:18:34.355 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 41], 00:18:34.355 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 46], 60.00th=[ 48], 
00:18:34.355 | 70.00th=[ 52], 80.00th=[ 224], 90.00th=[ 239], 95.00th=[ 243], 00:18:34.355 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 255], 99.95th=[ 257], 00:18:34.355 | 99.99th=[ 257] 00:18:34.355 bw ( KiB/s): min=67584, max=392704, per=10.70%, avg=168320.00, stdev=139253.98, samples=20 00:18:34.355 iops : min= 264, max= 1534, avg=657.50, stdev=543.96, samples=20 00:18:34.355 lat (msec) : 10=0.20%, 20=0.62%, 50=66.93%, 100=3.63%, 250=27.63% 00:18:34.355 lat (msec) : 500=0.99% 00:18:34.355 cpu : usr=0.90%, sys=1.40%, ctx=9165, majf=0, minf=1 00:18:34.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:34.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:34.355 issued rwts: total=0,6638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.355 00:18:34.355 Run status group 0 (all jobs): 00:18:34.355 WRITE: bw=1537MiB/s (1611MB/s), 65.4MiB/s-270MiB/s (68.6MB/s-283MB/s), io=15.4GiB (16.5GB), run=10043-10235msec 00:18:34.355 00:18:34.355 Disk stats (read/write): 00:18:34.355 nvme0n1: ios=49/5360, merge=0/0, ticks=52/1200614, in_queue=1200666, util=97.78% 00:18:34.355 nvme10n1: ios=49/5428, merge=0/0, ticks=36/1202298, in_queue=1202334, util=98.03% 00:18:34.355 nvme1n1: ios=30/5307, merge=0/0, ticks=24/1199207, in_queue=1199231, util=97.89% 00:18:34.355 nvme2n1: ios=0/5228, merge=0/0, ticks=0/1201352, in_queue=1201352, util=97.99% 00:18:34.355 nvme3n1: ios=0/15431, merge=0/0, ticks=0/1215176, in_queue=1215176, util=97.91% 00:18:34.355 nvme4n1: ios=0/15546, merge=0/0, ticks=0/1214770, in_queue=1214770, util=98.17% 00:18:34.355 nvme5n1: ios=0/6379, merge=0/0, ticks=0/1203210, in_queue=1203210, util=98.23% 00:18:34.355 nvme6n1: ios=4/15575, merge=0/0, ticks=172/1219067, in_queue=1219239, util=98.60% 00:18:34.355 nvme7n1: ios=0/15057, merge=0/0, ticks=0/1217965, in_queue=1217965, util=98.63% 00:18:34.355 nvme8n1: ios=0/21954, merge=0/0, ticks=0/1206337, in_queue=1206337, util=98.95% 00:18:34.355 nvme9n1: ios=0/13119, merge=0/0, ticks=0/1217896, in_queue=1217896, util=98.80% 00:18:34.355 19:19:10 -- target/multiconnection.sh@36 -- # sync 00:18:34.355 19:19:10 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:34.355 19:19:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.355 19:19:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:34.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.355 19:19:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:34.355 19:19:11 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK1 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK1 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.355 19:19:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:34.355 19:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.355 19:19:11 -- common/autotest_common.sh@10 -- # set +x 00:18:34.355 19:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.355 19:19:11 -- target/multiconnection.sh@37 -- 
# for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.355 19:19:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:34.355 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:34.355 19:19:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:34.355 19:19:11 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK2 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK2 00:18:34.355 19:19:11 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.355 19:19:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:34.355 19:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.355 19:19:11 -- common/autotest_common.sh@10 -- # set +x 00:18:34.355 19:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.355 19:19:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.355 19:19:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:34.355 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:34.355 19:19:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:34.355 19:19:11 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK3 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK3 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.355 19:19:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:34.355 19:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.355 19:19:11 -- common/autotest_common.sh@10 -- # set +x 00:18:34.355 19:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.355 19:19:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.355 19:19:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:34.355 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:34.355 19:19:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:34.355 19:19:11 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK4 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK4 00:18:34.355 19:19:11 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.355 19:19:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:34.355 19:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.355 19:19:11 -- common/autotest_common.sh@10 -- # set +x 00:18:34.355 19:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.355 19:19:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.355 19:19:11 -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:34.355 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:34.355 19:19:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:34.355 19:19:11 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK5 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK5 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.355 19:19:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:34.355 19:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.355 19:19:11 -- common/autotest_common.sh@10 -- # set +x 00:18:34.355 19:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.355 19:19:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.355 19:19:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:34.355 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:34.355 19:19:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:34.355 19:19:11 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK6 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK6 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.355 19:19:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:34.355 19:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.355 19:19:11 -- common/autotest_common.sh@10 -- # set +x 00:18:34.355 19:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.355 19:19:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.355 19:19:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:34.355 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:34.355 19:19:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:34.355 19:19:11 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK7 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK7 00:18:34.355 19:19:11 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.355 19:19:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:34.355 19:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.355 19:19:11 -- common/autotest_common.sh@10 -- # set +x 00:18:34.355 19:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.355 19:19:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.355 19:19:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:34.355 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 
1 controller(s) 00:18:34.355 19:19:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:34.355 19:19:11 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK8 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.355 19:19:11 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK8 00:18:34.614 19:19:11 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.614 19:19:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:34.614 19:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.614 19:19:11 -- common/autotest_common.sh@10 -- # set +x 00:18:34.614 19:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.614 19:19:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.614 19:19:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:34.614 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:34.614 19:19:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:34.614 19:19:11 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.614 19:19:11 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.614 19:19:11 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK9 00:18:34.614 19:19:11 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.614 19:19:11 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK9 00:18:34.614 19:19:11 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.614 19:19:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:34.614 19:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.614 19:19:11 -- common/autotest_common.sh@10 -- # set +x 00:18:34.614 19:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.614 19:19:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.614 19:19:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:34.614 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:34.614 19:19:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:34.614 19:19:12 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.614 19:19:12 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.614 19:19:12 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK10 00:18:34.614 19:19:12 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.614 19:19:12 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK10 00:18:34.873 19:19:12 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.873 19:19:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:34.873 19:19:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.873 19:19:12 -- common/autotest_common.sh@10 -- # set +x 00:18:34.873 19:19:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.873 19:19:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.873 19:19:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:34.873 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:34.873 19:19:12 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK11 00:18:34.873 19:19:12 -- common/autotest_common.sh@1196 -- # local i=0 00:18:34.873 19:19:12 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:34.873 19:19:12 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK11 00:18:34.873 19:19:12 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:34.873 19:19:12 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK11 00:18:34.873 19:19:12 -- common/autotest_common.sh@1208 -- # return 0 00:18:34.873 19:19:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:34.873 19:19:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.873 19:19:12 -- common/autotest_common.sh@10 -- # set +x 00:18:34.873 19:19:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.873 19:19:12 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:34.873 19:19:12 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:34.873 19:19:12 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:34.873 19:19:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:34.873 19:19:12 -- nvmf/common.sh@116 -- # sync 00:18:34.873 19:19:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:34.873 19:19:12 -- nvmf/common.sh@119 -- # set +e 00:18:34.873 19:19:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:34.873 19:19:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:34.873 rmmod nvme_tcp 00:18:34.873 rmmod nvme_fabrics 00:18:34.873 rmmod nvme_keyring 00:18:34.873 19:19:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:34.873 19:19:12 -- nvmf/common.sh@123 -- # set -e 00:18:34.873 19:19:12 -- nvmf/common.sh@124 -- # return 0 00:18:34.873 19:19:12 -- nvmf/common.sh@477 -- # '[' -n 78590 ']' 00:18:34.873 19:19:12 -- nvmf/common.sh@478 -- # killprocess 78590 00:18:34.873 19:19:12 -- common/autotest_common.sh@924 -- # '[' -z 78590 ']' 00:18:34.873 19:19:12 -- common/autotest_common.sh@928 -- # kill -0 78590 00:18:34.873 19:19:12 -- common/autotest_common.sh@929 -- # uname 00:18:34.873 19:19:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:34.873 19:19:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 78590 00:18:34.873 killing process with pid 78590 00:18:34.874 19:19:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:34.874 19:19:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:34.874 19:19:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 78590' 00:18:34.874 19:19:12 -- common/autotest_common.sh@943 -- # kill 78590 00:18:34.874 19:19:12 -- common/autotest_common.sh@948 -- # wait 78590 00:18:35.441 19:19:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:35.441 19:19:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:35.441 19:19:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:35.441 19:19:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.441 19:19:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:35.441 19:19:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.441 19:19:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.441 19:19:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.441 19:19:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:35.441 00:18:35.441 real 0m49.877s 00:18:35.441 user 2m45.266s 00:18:35.441 sys 0m27.502s 00:18:35.441 19:19:12 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:18:35.441 ************************************ 00:18:35.441 END TEST nvmf_multiconnection 00:18:35.441 ************************************ 00:18:35.441 19:19:12 -- common/autotest_common.sh@10 -- # set +x 00:18:35.441 19:19:12 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:35.441 19:19:12 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:18:35.441 19:19:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:35.441 19:19:12 -- common/autotest_common.sh@10 -- # set +x 00:18:35.441 ************************************ 00:18:35.441 START TEST nvmf_initiator_timeout 00:18:35.441 ************************************ 00:18:35.441 19:19:12 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:35.699 * Looking for test storage... 00:18:35.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:35.699 19:19:12 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:35.699 19:19:12 -- nvmf/common.sh@7 -- # uname -s 00:18:35.699 19:19:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.699 19:19:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.699 19:19:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.699 19:19:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.699 19:19:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.699 19:19:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.699 19:19:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.699 19:19:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.699 19:19:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.699 19:19:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.699 19:19:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:18:35.699 19:19:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:18:35.699 19:19:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.699 19:19:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.699 19:19:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:35.699 19:19:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.699 19:19:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.699 19:19:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.699 19:19:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.699 19:19:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.699 19:19:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.699 19:19:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.699 19:19:12 -- paths/export.sh@5 -- # export PATH 00:18:35.699 19:19:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.699 19:19:12 -- nvmf/common.sh@46 -- # : 0 00:18:35.699 19:19:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:35.699 19:19:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:35.699 19:19:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:35.699 19:19:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.699 19:19:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.699 19:19:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:35.699 19:19:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:35.699 19:19:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:35.699 19:19:12 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:35.699 19:19:12 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:35.699 19:19:12 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:35.699 19:19:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:35.699 19:19:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.699 19:19:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:35.699 19:19:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:35.699 19:19:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:35.699 19:19:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.699 19:19:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.699 19:19:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.699 19:19:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:35.699 19:19:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:35.699 19:19:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:35.699 19:19:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:35.699 19:19:12 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:18:35.699 19:19:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:35.699 19:19:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.699 19:19:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.700 19:19:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:35.700 19:19:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:35.700 19:19:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:35.700 19:19:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:35.700 19:19:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:35.700 19:19:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.700 19:19:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:35.700 19:19:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:35.700 19:19:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:35.700 19:19:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:35.700 19:19:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:35.700 19:19:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:35.700 Cannot find device "nvmf_tgt_br" 00:18:35.700 19:19:12 -- nvmf/common.sh@154 -- # true 00:18:35.700 19:19:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.700 Cannot find device "nvmf_tgt_br2" 00:18:35.700 19:19:12 -- nvmf/common.sh@155 -- # true 00:18:35.700 19:19:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:35.700 19:19:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:35.700 Cannot find device "nvmf_tgt_br" 00:18:35.700 19:19:12 -- nvmf/common.sh@157 -- # true 00:18:35.700 19:19:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:35.700 Cannot find device "nvmf_tgt_br2" 00:18:35.700 19:19:13 -- nvmf/common.sh@158 -- # true 00:18:35.700 19:19:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:35.700 19:19:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:35.700 19:19:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.700 19:19:13 -- nvmf/common.sh@161 -- # true 00:18:35.700 19:19:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.700 19:19:13 -- nvmf/common.sh@162 -- # true 00:18:35.700 19:19:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:35.700 19:19:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:35.700 19:19:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:35.700 19:19:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:35.700 19:19:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:35.958 19:19:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.958 19:19:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.958 19:19:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:35.958 19:19:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
00:18:35.958 19:19:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:35.958 19:19:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:35.958 19:19:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:35.958 19:19:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:35.958 19:19:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.958 19:19:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.958 19:19:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.958 19:19:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:35.958 19:19:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:35.958 19:19:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:35.958 19:19:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.958 19:19:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.958 19:19:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.958 19:19:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:35.958 19:19:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:35.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:18:35.958 00:18:35.958 --- 10.0.0.2 ping statistics --- 00:18:35.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.958 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:35.958 19:19:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:35.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:35.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:35.959 00:18:35.959 --- 10.0.0.3 ping statistics --- 00:18:35.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.959 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:35.959 19:19:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:35.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:35.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:18:35.959 00:18:35.959 --- 10.0.0.1 ping statistics --- 00:18:35.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.959 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:35.959 19:19:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.959 19:19:13 -- nvmf/common.sh@421 -- # return 0 00:18:35.959 19:19:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:35.959 19:19:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.959 19:19:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:35.959 19:19:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:35.959 19:19:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.959 19:19:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:35.959 19:19:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:35.959 19:19:13 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:35.959 19:19:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:35.959 19:19:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:35.959 19:19:13 -- common/autotest_common.sh@10 -- # set +x 00:18:35.959 19:19:13 -- nvmf/common.sh@469 -- # nvmfpid=79665 00:18:35.959 19:19:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:35.959 19:19:13 -- nvmf/common.sh@470 -- # waitforlisten 79665 00:18:35.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.959 19:19:13 -- common/autotest_common.sh@817 -- # '[' -z 79665 ']' 00:18:35.959 19:19:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.959 19:19:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:35.959 19:19:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.959 19:19:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:35.959 19:19:13 -- common/autotest_common.sh@10 -- # set +x 00:18:35.959 [2024-02-14 19:19:13.334585] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:18:35.959 [2024-02-14 19:19:13.334663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.218 [2024-02-14 19:19:13.469180] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.218 [2024-02-14 19:19:13.570241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:36.218 [2024-02-14 19:19:13.570665] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.218 [2024-02-14 19:19:13.570811] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.218 [2024-02-14 19:19:13.570941] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
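The target binary itself is launched inside the namespace, which is why the 10.0.0.1 ping and every later target-side command in the log is wrapped in ip netns exec. A simplified stand-in for what nvmfappstart and waitforlisten do here, using the binary path and flags from the log (the polling loop below is an assumption; the real helper has its own retry logic around the RPC socket):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default RPC socket until the target is ready to accept rpc.py calls.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done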
00:18:36.218 [2024-02-14 19:19:13.571165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.218 [2024-02-14 19:19:13.571405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.218 [2024-02-14 19:19:13.571561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.218 [2024-02-14 19:19:13.571565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.188 19:19:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:37.188 19:19:14 -- common/autotest_common.sh@850 -- # return 0 00:18:37.188 19:19:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:37.188 19:19:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:37.188 19:19:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.188 19:19:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.188 19:19:14 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:37.188 19:19:14 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:37.188 19:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.188 19:19:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.188 Malloc0 00:18:37.188 19:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.188 19:19:14 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:37.188 19:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.188 19:19:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.188 Delay0 00:18:37.188 19:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.188 19:19:14 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:37.188 19:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.188 19:19:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.188 [2024-02-14 19:19:14.423256] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.188 19:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.188 19:19:14 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:37.188 19:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.188 19:19:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.188 19:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.188 19:19:14 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:37.188 19:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.188 19:19:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.188 19:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.188 19:19:14 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.188 19:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.188 19:19:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.188 [2024-02-14 19:19:14.451430] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.188 19:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.188 19:19:14 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:37.450 19:19:14 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:37.450 19:19:14 -- common/autotest_common.sh@1175 -- # local i=0 00:18:37.450 19:19:14 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:18:37.450 19:19:14 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:18:37.450 19:19:14 -- common/autotest_common.sh@1182 -- # sleep 2 00:18:39.357 19:19:16 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:18:39.357 19:19:16 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:18:39.357 19:19:16 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:18:39.357 19:19:16 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:18:39.357 19:19:16 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:18:39.357 19:19:16 -- common/autotest_common.sh@1185 -- # return 0 00:18:39.357 19:19:16 -- target/initiator_timeout.sh@35 -- # fio_pid=79747 00:18:39.357 19:19:16 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:39.357 19:19:16 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:39.357 [global] 00:18:39.357 thread=1 00:18:39.357 invalidate=1 00:18:39.357 rw=write 00:18:39.357 time_based=1 00:18:39.357 runtime=60 00:18:39.357 ioengine=libaio 00:18:39.357 direct=1 00:18:39.357 bs=4096 00:18:39.357 iodepth=1 00:18:39.357 norandommap=0 00:18:39.357 numjobs=1 00:18:39.357 00:18:39.357 verify_dump=1 00:18:39.357 verify_backlog=512 00:18:39.357 verify_state_save=0 00:18:39.357 do_verify=1 00:18:39.357 verify=crc32c-intel 00:18:39.357 [job0] 00:18:39.357 filename=/dev/nvme0n1 00:18:39.357 Could not set queue depth (nvme0n1) 00:18:39.615 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:39.615 fio-3.35 00:18:39.615 Starting 1 thread 00:18:42.898 19:19:19 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:42.898 19:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.898 19:19:19 -- common/autotest_common.sh@10 -- # set +x 00:18:42.898 true 00:18:42.898 19:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.898 19:19:19 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:42.898 19:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.898 19:19:19 -- common/autotest_common.sh@10 -- # set +x 00:18:42.898 true 00:18:42.898 19:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.898 19:19:19 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:42.898 19:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.898 19:19:19 -- common/autotest_common.sh@10 -- # set +x 00:18:42.898 true 00:18:42.898 19:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.898 19:19:19 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:42.898 19:19:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.898 19:19:19 -- common/autotest_common.sh@10 -- # set +x 00:18:42.898 true 00:18:42.898 19:19:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.898 19:19:19 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:45.430 19:19:22 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:45.430 19:19:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.430 19:19:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.430 true 00:18:45.430 19:19:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.430 19:19:22 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:45.430 19:19:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.430 19:19:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.430 true 00:18:45.430 19:19:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.430 19:19:22 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:45.430 19:19:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.430 19:19:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.430 true 00:18:45.430 19:19:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.430 19:19:22 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:45.430 19:19:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.430 19:19:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.430 true 00:18:45.430 19:19:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.430 19:19:22 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:45.430 19:19:22 -- target/initiator_timeout.sh@54 -- # wait 79747 00:19:41.672 00:19:41.672 job0: (groupid=0, jobs=1): err= 0: pid=79768: Wed Feb 14 19:20:16 2024 00:19:41.672 read: IOPS=789, BW=3160KiB/s (3236kB/s)(185MiB/60000msec) 00:19:41.672 slat (usec): min=11, max=12771, avg=15.71, stdev=71.24 00:19:41.672 clat (usec): min=154, max=40648k, avg=1064.03, stdev=186706.00 00:19:41.672 lat (usec): min=168, max=40648k, avg=1079.74, stdev=186706.01 00:19:41.672 clat percentiles (usec): 00:19:41.672 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 196], 00:19:41.672 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 00:19:41.672 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 235], 00:19:41.672 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 289], 99.95th=[ 334], 00:19:41.672 | 99.99th=[ 529] 00:19:41.672 write: IOPS=793, BW=3174KiB/s (3251kB/s)(186MiB/60000msec); 0 zone resets 00:19:41.672 slat (usec): min=18, max=575, avg=22.57, stdev= 6.56 00:19:41.672 clat (usec): min=121, max=7364, avg=159.61, stdev=44.93 00:19:41.672 lat (usec): min=140, max=7393, avg=182.18, stdev=45.58 00:19:41.672 clat percentiles (usec): 00:19:41.672 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:19:41.672 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:19:41.673 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:19:41.673 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 225], 99.95th=[ 253], 00:19:41.673 | 99.99th=[ 1663] 00:19:41.673 bw ( KiB/s): min= 2600, max=12288, per=100.00%, avg=9556.18, stdev=1922.10, samples=39 00:19:41.673 iops : min= 650, max= 3072, avg=2389.03, stdev=480.51, samples=39 00:19:41.673 lat (usec) : 250=99.47%, 500=0.52%, 750=0.01%, 1000=0.01% 00:19:41.673 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:19:41.673 cpu : usr=0.52%, sys=2.15%, ctx=95034, majf=0, minf=2 00:19:41.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:41.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:19:41.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.673 issued rwts: total=47397,47616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:41.673 00:19:41.673 Run status group 0 (all jobs): 00:19:41.673 READ: bw=3160KiB/s (3236kB/s), 3160KiB/s-3160KiB/s (3236kB/s-3236kB/s), io=185MiB (194MB), run=60000-60000msec 00:19:41.673 WRITE: bw=3174KiB/s (3251kB/s), 3174KiB/s-3174KiB/s (3251kB/s-3251kB/s), io=186MiB (195MB), run=60000-60000msec 00:19:41.673 00:19:41.673 Disk stats (read/write): 00:19:41.673 nvme0n1: ios=47348/47493, merge=0/0, ticks=10154/8102, in_queue=18256, util=99.91% 00:19:41.673 19:20:16 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:41.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:41.673 19:20:16 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:41.673 19:20:16 -- common/autotest_common.sh@1196 -- # local i=0 00:19:41.673 19:20:16 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:19:41.673 19:20:16 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:41.673 19:20:16 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:19:41.673 19:20:16 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:41.673 19:20:17 -- common/autotest_common.sh@1208 -- # return 0 00:19:41.673 19:20:17 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:41.673 19:20:17 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:41.673 nvmf hotplug test: fio successful as expected 00:19:41.673 19:20:17 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:41.673 19:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.673 19:20:17 -- common/autotest_common.sh@10 -- # set +x 00:19:41.673 19:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.673 19:20:17 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:41.673 19:20:17 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:41.673 19:20:17 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:41.673 19:20:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:41.673 19:20:17 -- nvmf/common.sh@116 -- # sync 00:19:41.673 19:20:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:41.673 19:20:17 -- nvmf/common.sh@119 -- # set +e 00:19:41.673 19:20:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:41.673 19:20:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:41.673 rmmod nvme_tcp 00:19:41.673 rmmod nvme_fabrics 00:19:41.673 rmmod nvme_keyring 00:19:41.673 19:20:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:41.673 19:20:17 -- nvmf/common.sh@123 -- # set -e 00:19:41.673 19:20:17 -- nvmf/common.sh@124 -- # return 0 00:19:41.673 19:20:17 -- nvmf/common.sh@477 -- # '[' -n 79665 ']' 00:19:41.673 19:20:17 -- nvmf/common.sh@478 -- # killprocess 79665 00:19:41.673 19:20:17 -- common/autotest_common.sh@924 -- # '[' -z 79665 ']' 00:19:41.673 19:20:17 -- common/autotest_common.sh@928 -- # kill -0 79665 00:19:41.673 19:20:17 -- common/autotest_common.sh@929 -- # uname 00:19:41.673 19:20:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:41.673 19:20:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 79665 00:19:41.673 killing process with pid 
79665 00:19:41.673 19:20:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:41.673 19:20:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:41.673 19:20:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 79665' 00:19:41.673 19:20:17 -- common/autotest_common.sh@943 -- # kill 79665 00:19:41.673 19:20:17 -- common/autotest_common.sh@948 -- # wait 79665 00:19:41.673 19:20:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:41.673 19:20:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:41.673 19:20:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:41.673 19:20:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.673 19:20:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:41.673 19:20:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.673 19:20:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.673 19:20:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.673 19:20:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:41.673 ************************************ 00:19:41.673 END TEST nvmf_initiator_timeout 00:19:41.673 ************************************ 00:19:41.673 00:19:41.673 real 1m4.657s 00:19:41.673 user 4m5.994s 00:19:41.673 sys 0m8.772s 00:19:41.673 19:20:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:41.673 19:20:17 -- common/autotest_common.sh@10 -- # set +x 00:19:41.673 19:20:17 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:41.673 19:20:17 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:19:41.673 19:20:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:41.673 19:20:17 -- common/autotest_common.sh@10 -- # set +x 00:19:41.673 19:20:17 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:19:41.673 19:20:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:41.673 19:20:17 -- common/autotest_common.sh@10 -- # set +x 00:19:41.673 19:20:17 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:19:41.673 19:20:17 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:41.673 19:20:17 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:19:41.673 19:20:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:41.673 19:20:17 -- common/autotest_common.sh@10 -- # set +x 00:19:41.673 ************************************ 00:19:41.673 START TEST nvmf_multicontroller 00:19:41.673 ************************************ 00:19:41.673 19:20:17 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:41.673 * Looking for test storage... 
00:19:41.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:41.673 19:20:17 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:41.673 19:20:17 -- nvmf/common.sh@7 -- # uname -s 00:19:41.673 19:20:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.673 19:20:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.673 19:20:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.673 19:20:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.673 19:20:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.673 19:20:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.673 19:20:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.673 19:20:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.673 19:20:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.673 19:20:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.673 19:20:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:41.673 19:20:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:41.673 19:20:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.673 19:20:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.673 19:20:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:41.673 19:20:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.673 19:20:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.673 19:20:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.673 19:20:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.673 19:20:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.673 19:20:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.673 19:20:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.673 19:20:17 -- 
paths/export.sh@5 -- # export PATH 00:19:41.673 19:20:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.673 19:20:17 -- nvmf/common.sh@46 -- # : 0 00:19:41.673 19:20:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:41.673 19:20:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:41.673 19:20:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:41.673 19:20:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.673 19:20:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.673 19:20:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:41.673 19:20:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:41.673 19:20:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:41.673 19:20:17 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.673 19:20:17 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.674 19:20:17 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:41.674 19:20:17 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:41.674 19:20:17 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.674 19:20:17 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:41.674 19:20:17 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:41.674 19:20:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:41.674 19:20:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.674 19:20:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:41.674 19:20:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:41.674 19:20:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:41.674 19:20:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.674 19:20:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.674 19:20:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.674 19:20:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:41.674 19:20:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:41.674 19:20:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:41.674 19:20:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:41.674 19:20:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:41.674 19:20:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:41.674 19:20:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.674 19:20:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.674 19:20:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:41.674 19:20:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:41.674 19:20:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:41.674 19:20:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:41.674 19:20:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:41.674 19:20:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.674 19:20:17 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:41.674 19:20:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:41.674 19:20:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:41.674 19:20:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:41.674 19:20:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:41.674 19:20:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:41.674 Cannot find device "nvmf_tgt_br" 00:19:41.674 19:20:17 -- nvmf/common.sh@154 -- # true 00:19:41.674 19:20:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:41.674 Cannot find device "nvmf_tgt_br2" 00:19:41.674 19:20:17 -- nvmf/common.sh@155 -- # true 00:19:41.674 19:20:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:41.674 19:20:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:41.674 Cannot find device "nvmf_tgt_br" 00:19:41.674 19:20:17 -- nvmf/common.sh@157 -- # true 00:19:41.674 19:20:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:41.674 Cannot find device "nvmf_tgt_br2" 00:19:41.674 19:20:17 -- nvmf/common.sh@158 -- # true 00:19:41.674 19:20:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:41.674 19:20:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:41.674 19:20:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.674 19:20:17 -- nvmf/common.sh@161 -- # true 00:19:41.674 19:20:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.674 19:20:17 -- nvmf/common.sh@162 -- # true 00:19:41.674 19:20:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:41.674 19:20:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:41.674 19:20:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:41.674 19:20:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:41.674 19:20:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:41.674 19:20:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:41.674 19:20:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:41.674 19:20:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:41.674 19:20:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:41.674 19:20:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:41.674 19:20:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:41.674 19:20:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:41.674 19:20:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:41.674 19:20:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:41.674 19:20:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:41.674 19:20:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:41.674 19:20:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:41.674 19:20:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:41.674 19:20:17 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:41.674 19:20:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:41.674 19:20:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:41.674 19:20:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:41.674 19:20:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:41.674 19:20:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:41.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:41.674 00:19:41.674 --- 10.0.0.2 ping statistics --- 00:19:41.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.674 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:41.674 19:20:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:41.674 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:41.674 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:41.674 00:19:41.674 --- 10.0.0.3 ping statistics --- 00:19:41.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.674 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:41.674 19:20:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:41.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:41.674 00:19:41.674 --- 10.0.0.1 ping statistics --- 00:19:41.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.674 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:41.674 19:20:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.674 19:20:17 -- nvmf/common.sh@421 -- # return 0 00:19:41.674 19:20:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:41.674 19:20:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.674 19:20:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:41.674 19:20:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:41.674 19:20:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.674 19:20:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:41.674 19:20:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:41.674 19:20:18 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:41.674 19:20:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:41.674 19:20:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:41.674 19:20:18 -- common/autotest_common.sh@10 -- # set +x 00:19:41.674 19:20:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:41.674 19:20:18 -- nvmf/common.sh@469 -- # nvmfpid=80599 00:19:41.674 19:20:18 -- nvmf/common.sh@470 -- # waitforlisten 80599 00:19:41.674 19:20:18 -- common/autotest_common.sh@817 -- # '[' -z 80599 ']' 00:19:41.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.674 19:20:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.674 19:20:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:41.674 19:20:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
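As in the initiator_timeout run earlier, the harness only pokes two holes in the firewall: the NVMe/TCP listener port on the initiator-facing veth and hairpin forwarding on the test bridge. The two rules from the log, restated with their intent (interface names are the harness defaults shown above):

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port on the initiator veth
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow traffic to be forwarded within the nvmf_br bridge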
00:19:41.674 19:20:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:41.674 19:20:18 -- common/autotest_common.sh@10 -- # set +x 00:19:41.674 [2024-02-14 19:20:18.071217] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:41.674 [2024-02-14 19:20:18.071292] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.674 [2024-02-14 19:20:18.204557] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:41.674 [2024-02-14 19:20:18.284548] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:41.674 [2024-02-14 19:20:18.284689] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.674 [2024-02-14 19:20:18.284726] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.674 [2024-02-14 19:20:18.284734] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.674 [2024-02-14 19:20:18.285272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.674 [2024-02-14 19:20:18.285459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:41.674 [2024-02-14 19:20:18.285466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.674 19:20:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:41.674 19:20:19 -- common/autotest_common.sh@850 -- # return 0 00:19:41.674 19:20:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:41.674 19:20:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:41.674 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 19:20:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.934 19:20:19 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 [2024-02-14 19:20:19.099168] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 Malloc0 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 [2024-02-14 19:20:19.164120] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 [2024-02-14 19:20:19.172030] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 Malloc1 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:41.934 19:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.934 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:41.934 19:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.934 19:20:19 -- host/multicontroller.sh@44 -- # bdevperf_pid=80651 00:19:41.934 19:20:19 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:41.934 19:20:19 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.934 19:20:19 -- host/multicontroller.sh@47 -- # waitforlisten 80651 /var/tmp/bdevperf.sock 00:19:41.934 19:20:19 -- common/autotest_common.sh@817 -- # '[' -z 80651 ']' 00:19:41.935 19:20:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.935 19:20:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:41.935 19:20:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.935 19:20:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:41.935 19:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:42.869 19:20:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:42.869 19:20:20 -- common/autotest_common.sh@850 -- # return 0 00:19:42.869 19:20:20 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:42.869 19:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.869 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.128 NVMe0n1 00:19:43.128 19:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.128 19:20:20 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:43.128 19:20:20 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:43.128 19:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.128 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.128 19:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.128 1 00:19:43.128 19:20:20 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:43.128 19:20:20 -- common/autotest_common.sh@638 -- # local es=0 00:19:43.128 19:20:20 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:43.128 19:20:20 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:43.128 19:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.128 19:20:20 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:43.128 19:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.128 19:20:20 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:43.128 19:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.128 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.128 2024/02/14 19:20:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:43.128 request: 00:19:43.128 { 00:19:43.128 "method": "bdev_nvme_attach_controller", 00:19:43.128 "params": { 00:19:43.128 "name": "NVMe0", 00:19:43.128 "trtype": "tcp", 00:19:43.128 "traddr": "10.0.0.2", 00:19:43.128 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:43.128 "hostaddr": "10.0.0.2", 00:19:43.128 "hostsvcid": "60000", 00:19:43.128 "adrfam": "ipv4", 00:19:43.129 "trsvcid": "4420", 00:19:43.129 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:43.129 } 00:19:43.129 } 00:19:43.129 Got JSON-RPC error response 
00:19:43.129 GoRPCClient: error on JSON-RPC call 00:19:43.129 19:20:20 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:43.129 19:20:20 -- common/autotest_common.sh@641 -- # es=1 00:19:43.129 19:20:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:43.129 19:20:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:43.129 19:20:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:43.129 19:20:20 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:43.129 19:20:20 -- common/autotest_common.sh@638 -- # local es=0 00:19:43.129 19:20:20 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:43.129 19:20:20 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:43.129 19:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.129 19:20:20 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:43.129 19:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.129 19:20:20 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:43.129 19:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.129 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.129 2024/02/14 19:20:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:43.129 request: 00:19:43.129 { 00:19:43.129 "method": "bdev_nvme_attach_controller", 00:19:43.129 "params": { 00:19:43.129 "name": "NVMe0", 00:19:43.129 "trtype": "tcp", 00:19:43.129 "traddr": "10.0.0.2", 00:19:43.129 "hostaddr": "10.0.0.2", 00:19:43.129 "hostsvcid": "60000", 00:19:43.129 "adrfam": "ipv4", 00:19:43.129 "trsvcid": "4420", 00:19:43.129 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:43.129 } 00:19:43.129 } 00:19:43.129 Got JSON-RPC error response 00:19:43.129 GoRPCClient: error on JSON-RPC call 00:19:43.129 19:20:20 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:43.129 19:20:20 -- common/autotest_common.sh@641 -- # es=1 00:19:43.129 19:20:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:43.129 19:20:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:43.129 19:20:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:43.129 19:20:20 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:43.129 19:20:20 -- common/autotest_common.sh@638 -- # local es=0 00:19:43.129 19:20:20 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:43.129 19:20:20 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:43.129 19:20:20 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.129 19:20:20 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:43.129 19:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.129 19:20:20 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:43.129 19:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.129 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.129 2024/02/14 19:20:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:43.129 request: 00:19:43.129 { 00:19:43.129 "method": "bdev_nvme_attach_controller", 00:19:43.129 "params": { 00:19:43.129 "name": "NVMe0", 00:19:43.129 "trtype": "tcp", 00:19:43.129 "traddr": "10.0.0.2", 00:19:43.129 "hostaddr": "10.0.0.2", 00:19:43.129 "hostsvcid": "60000", 00:19:43.129 "adrfam": "ipv4", 00:19:43.129 "trsvcid": "4420", 00:19:43.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.129 "multipath": "disable" 00:19:43.129 } 00:19:43.129 } 00:19:43.129 Got JSON-RPC error response 00:19:43.129 GoRPCClient: error on JSON-RPC call 00:19:43.129 19:20:20 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:43.129 19:20:20 -- common/autotest_common.sh@641 -- # es=1 00:19:43.129 19:20:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:43.129 19:20:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:43.129 19:20:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:43.129 19:20:20 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:43.129 19:20:20 -- common/autotest_common.sh@638 -- # local es=0 00:19:43.129 19:20:20 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:43.129 19:20:20 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:43.129 19:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.129 19:20:20 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:43.129 19:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:43.129 19:20:20 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:43.129 19:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.129 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.129 2024/02/14 19:20:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified 
network path 00:19:43.129 request: 00:19:43.129 { 00:19:43.129 "method": "bdev_nvme_attach_controller", 00:19:43.129 "params": { 00:19:43.129 "name": "NVMe0", 00:19:43.129 "trtype": "tcp", 00:19:43.129 "traddr": "10.0.0.2", 00:19:43.129 "hostaddr": "10.0.0.2", 00:19:43.129 "hostsvcid": "60000", 00:19:43.129 "adrfam": "ipv4", 00:19:43.129 "trsvcid": "4420", 00:19:43.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.129 "multipath": "failover" 00:19:43.129 } 00:19:43.129 } 00:19:43.129 Got JSON-RPC error response 00:19:43.129 GoRPCClient: error on JSON-RPC call 00:19:43.129 19:20:20 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:43.129 19:20:20 -- common/autotest_common.sh@641 -- # es=1 00:19:43.129 19:20:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:43.129 19:20:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:43.129 19:20:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:43.129 19:20:20 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:43.129 19:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.129 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.129 00:19:43.129 19:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.129 19:20:20 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:43.129 19:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.129 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.129 19:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.129 19:20:20 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:43.129 19:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.129 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.129 00:19:43.129 19:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.129 19:20:20 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:43.129 19:20:20 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:43.129 19:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.129 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.129 19:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.387 19:20:20 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:43.387 19:20:20 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.322 0 00:19:44.322 19:20:21 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:44.322 19:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.322 19:20:21 -- common/autotest_common.sh@10 -- # set +x 00:19:44.322 19:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.322 19:20:21 -- host/multicontroller.sh@100 -- # killprocess 80651 00:19:44.322 19:20:21 -- common/autotest_common.sh@924 -- # '[' -z 80651 ']' 00:19:44.322 19:20:21 -- common/autotest_common.sh@928 -- # kill -0 80651 00:19:44.322 19:20:21 -- common/autotest_common.sh@929 -- # uname 00:19:44.322 19:20:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux 
']' 00:19:44.322 19:20:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 80651 00:19:44.322 killing process with pid 80651 00:19:44.322 19:20:21 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:44.322 19:20:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:44.322 19:20:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 80651' 00:19:44.322 19:20:21 -- common/autotest_common.sh@943 -- # kill 80651 00:19:44.322 19:20:21 -- common/autotest_common.sh@948 -- # wait 80651 00:19:44.889 19:20:22 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:44.889 19:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.889 19:20:22 -- common/autotest_common.sh@10 -- # set +x 00:19:44.889 19:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.889 19:20:22 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:44.889 19:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.889 19:20:22 -- common/autotest_common.sh@10 -- # set +x 00:19:44.889 19:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.889 19:20:22 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:44.889 19:20:22 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:44.889 19:20:22 -- common/autotest_common.sh@1595 -- # read -r file 00:19:44.889 19:20:22 -- common/autotest_common.sh@1594 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:44.889 19:20:22 -- common/autotest_common.sh@1594 -- # sort -u 00:19:44.889 19:20:22 -- common/autotest_common.sh@1596 -- # cat 00:19:44.889 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:44.889 [2024-02-14 19:20:19.294651] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:44.889 [2024-02-14 19:20:19.295315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80651 ] 00:19:44.889 [2024-02-14 19:20:19.436356] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.889 [2024-02-14 19:20:19.525823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.889 [2024-02-14 19:20:20.519283] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 440b7b5e-3941-4cb6-82cc-f6f1627c7127 already exists 00:19:44.889 [2024-02-14 19:20:20.519351] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:440b7b5e-3941-4cb6-82cc-f6f1627c7127 alias for bdev NVMe1n1 00:19:44.889 [2024-02-14 19:20:20.519369] bdev_nvme.c:4183:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:44.889 Running I/O for 1 seconds... 
00:19:44.889 00:19:44.889 Latency(us) 00:19:44.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.889 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:44.889 NVMe0n1 : 1.00 23886.87 93.31 0.00 0.00 5346.27 2934.23 10724.07 00:19:44.889 =================================================================================================================== 00:19:44.889 Total : 23886.87 93.31 0.00 0.00 5346.27 2934.23 10724.07 00:19:44.889 Received shutdown signal, test time was about 1.000000 seconds 00:19:44.889 00:19:44.889 Latency(us) 00:19:44.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.889 =================================================================================================================== 00:19:44.889 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.889 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:44.889 19:20:22 -- common/autotest_common.sh@1601 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:44.889 19:20:22 -- common/autotest_common.sh@1595 -- # read -r file 00:19:44.889 19:20:22 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:44.889 19:20:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:44.889 19:20:22 -- nvmf/common.sh@116 -- # sync 00:19:44.889 19:20:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:44.889 19:20:22 -- nvmf/common.sh@119 -- # set +e 00:19:44.889 19:20:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:44.889 19:20:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:44.889 rmmod nvme_tcp 00:19:44.889 rmmod nvme_fabrics 00:19:44.889 rmmod nvme_keyring 00:19:44.889 19:20:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:44.889 19:20:22 -- nvmf/common.sh@123 -- # set -e 00:19:44.889 19:20:22 -- nvmf/common.sh@124 -- # return 0 00:19:44.889 19:20:22 -- nvmf/common.sh@477 -- # '[' -n 80599 ']' 00:19:44.889 19:20:22 -- nvmf/common.sh@478 -- # killprocess 80599 00:19:44.889 19:20:22 -- common/autotest_common.sh@924 -- # '[' -z 80599 ']' 00:19:44.889 19:20:22 -- common/autotest_common.sh@928 -- # kill -0 80599 00:19:44.889 19:20:22 -- common/autotest_common.sh@929 -- # uname 00:19:44.889 19:20:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:44.889 19:20:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 80599 00:19:44.889 killing process with pid 80599 00:19:44.889 19:20:22 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:44.889 19:20:22 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:44.889 19:20:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 80599' 00:19:44.889 19:20:22 -- common/autotest_common.sh@943 -- # kill 80599 00:19:44.889 19:20:22 -- common/autotest_common.sh@948 -- # wait 80599 00:19:45.149 19:20:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:45.149 19:20:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:45.149 19:20:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:45.149 19:20:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.149 19:20:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:45.149 19:20:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.149 19:20:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.149 19:20:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.149 19:20:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:45.149 
00:19:45.149 real 0m4.919s 00:19:45.149 user 0m15.476s 00:19:45.149 sys 0m1.128s 00:19:45.149 19:20:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:45.149 ************************************ 00:19:45.149 END TEST nvmf_multicontroller 00:19:45.149 ************************************ 00:19:45.149 19:20:22 -- common/autotest_common.sh@10 -- # set +x 00:19:45.149 19:20:22 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:45.149 19:20:22 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:19:45.149 19:20:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:45.149 19:20:22 -- common/autotest_common.sh@10 -- # set +x 00:19:45.149 ************************************ 00:19:45.149 START TEST nvmf_aer 00:19:45.149 ************************************ 00:19:45.149 19:20:22 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:45.408 * Looking for test storage... 00:19:45.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:45.408 19:20:22 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.408 19:20:22 -- nvmf/common.sh@7 -- # uname -s 00:19:45.408 19:20:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.408 19:20:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.408 19:20:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.408 19:20:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.408 19:20:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.408 19:20:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.408 19:20:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.408 19:20:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.408 19:20:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.408 19:20:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.408 19:20:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:45.408 19:20:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:45.408 19:20:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.408 19:20:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.408 19:20:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:45.408 19:20:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.408 19:20:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.408 19:20:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.408 19:20:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.409 19:20:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.409 19:20:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.409 19:20:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.409 19:20:22 -- paths/export.sh@5 -- # export PATH 00:19:45.409 19:20:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.409 19:20:22 -- nvmf/common.sh@46 -- # : 0 00:19:45.409 19:20:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:45.409 19:20:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:45.409 19:20:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:45.409 19:20:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.409 19:20:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.409 19:20:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:45.409 19:20:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:45.409 19:20:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:45.409 19:20:22 -- host/aer.sh@11 -- # nvmftestinit 00:19:45.409 19:20:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:45.409 19:20:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.409 19:20:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:45.409 19:20:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:45.409 19:20:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:45.409 19:20:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.409 19:20:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.409 19:20:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.409 19:20:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:45.409 19:20:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:45.409 19:20:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:45.409 19:20:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:45.409 19:20:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:45.409 19:20:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:45.409 19:20:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.409 19:20:22 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.409 19:20:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:45.409 19:20:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:45.409 19:20:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:45.409 19:20:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:45.409 19:20:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:45.409 19:20:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.409 19:20:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:45.409 19:20:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:45.409 19:20:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:45.409 19:20:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:45.409 19:20:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:45.409 19:20:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:45.409 Cannot find device "nvmf_tgt_br" 00:19:45.409 19:20:22 -- nvmf/common.sh@154 -- # true 00:19:45.409 19:20:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.409 Cannot find device "nvmf_tgt_br2" 00:19:45.409 19:20:22 -- nvmf/common.sh@155 -- # true 00:19:45.409 19:20:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:45.409 19:20:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:45.409 Cannot find device "nvmf_tgt_br" 00:19:45.409 19:20:22 -- nvmf/common.sh@157 -- # true 00:19:45.409 19:20:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:45.409 Cannot find device "nvmf_tgt_br2" 00:19:45.409 19:20:22 -- nvmf/common.sh@158 -- # true 00:19:45.409 19:20:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:45.409 19:20:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:45.409 19:20:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.409 19:20:22 -- nvmf/common.sh@161 -- # true 00:19:45.409 19:20:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.409 19:20:22 -- nvmf/common.sh@162 -- # true 00:19:45.409 19:20:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:45.409 19:20:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:45.409 19:20:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:45.409 19:20:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:45.409 19:20:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:45.409 19:20:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:45.667 19:20:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:45.667 19:20:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:45.667 19:20:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:45.667 19:20:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:45.667 19:20:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:45.667 19:20:22 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:45.667 19:20:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:45.667 19:20:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:45.667 19:20:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:45.667 19:20:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:45.667 19:20:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:45.667 19:20:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:45.667 19:20:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:45.667 19:20:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:45.667 19:20:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:45.667 19:20:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:45.667 19:20:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:45.667 19:20:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:45.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:19:45.667 00:19:45.667 --- 10.0.0.2 ping statistics --- 00:19:45.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.667 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:45.667 19:20:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:45.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:45.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:19:45.667 00:19:45.667 --- 10.0.0.3 ping statistics --- 00:19:45.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.668 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:45.668 19:20:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:45.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:45.668 00:19:45.668 --- 10.0.0.1 ping statistics --- 00:19:45.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.668 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:45.668 19:20:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.668 19:20:22 -- nvmf/common.sh@421 -- # return 0 00:19:45.668 19:20:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:45.668 19:20:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.668 19:20:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:45.668 19:20:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:45.668 19:20:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.668 19:20:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:45.668 19:20:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:45.668 19:20:22 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:45.668 19:20:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:45.668 19:20:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:45.668 19:20:22 -- common/autotest_common.sh@10 -- # set +x 00:19:45.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
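The nvmf_veth_init steps traced above build a small virtual topology before the target starts: a network namespace for the target side, three veth pairs whose root-namespace ends are joined by a bridge, one iptables rule to admit NVMe/TCP on port 4420, and ping checks in both directions. A condensed sketch of that setup, using the same device, namespace and address names that appear in the trace (must run as root; this illustrates the steps, it is not a replacement for nvmf/common.sh):

#!/usr/bin/env bash
# Recreate the test topology from nvmf_veth_init (names/addresses as in the trace).
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: initiator end stays in the root namespace, target ends move into the netns.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 on the initiator, 10.0.0.2/10.0.0.3 inside the target namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the root-namespace ends together so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP (port 4420) on the initiator interface and allow bridged forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, as in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1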
00:19:45.668 19:20:22 -- nvmf/common.sh@469 -- # nvmfpid=80898 00:19:45.668 19:20:22 -- nvmf/common.sh@470 -- # waitforlisten 80898 00:19:45.668 19:20:22 -- common/autotest_common.sh@817 -- # '[' -z 80898 ']' 00:19:45.668 19:20:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:45.668 19:20:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.668 19:20:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:45.668 19:20:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.668 19:20:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:45.668 19:20:22 -- common/autotest_common.sh@10 -- # set +x 00:19:45.668 [2024-02-14 19:20:23.030641] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:45.668 [2024-02-14 19:20:23.030735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.925 [2024-02-14 19:20:23.163087] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.926 [2024-02-14 19:20:23.248401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:45.926 [2024-02-14 19:20:23.248658] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.926 [2024-02-14 19:20:23.248676] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.926 [2024-02-14 19:20:23.248684] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
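nvmfappstart then launches nvmf_tgt inside that namespace and blocks until the JSON-RPC socket answers. A minimal sketch of the same pattern, with the harness's waitforlisten helper approximated by a poll loop against scripts/rpc.py (the poll loop is an assumption about how to stand this up outside the harness; the launch command and paths are taken from the trace):

#!/usr/bin/env bash
# Start nvmf_tgt inside the target namespace and wait for its RPC socket,
# mirroring nvmfappstart/waitforlisten from the trace (shm id 0, core mask 0xF).
set -e

SPDK=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket until the app is listening (simplified waitforlisten).
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done

echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"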
00:19:45.926 [2024-02-14 19:20:23.248790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.926 [2024-02-14 19:20:23.249436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.926 [2024-02-14 19:20:23.249574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.926 [2024-02-14 19:20:23.249585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.490 19:20:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:46.490 19:20:23 -- common/autotest_common.sh@850 -- # return 0 00:19:46.490 19:20:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:46.490 19:20:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:46.490 19:20:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.749 19:20:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.749 19:20:23 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.749 19:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.749 19:20:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.749 [2024-02-14 19:20:23.943444] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.749 19:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.749 19:20:23 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:46.749 19:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.749 19:20:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.749 Malloc0 00:19:46.749 19:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.749 19:20:23 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:46.749 19:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.749 19:20:23 -- common/autotest_common.sh@10 -- # set +x 00:19:46.749 19:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.749 19:20:24 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:46.749 19:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.749 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:46.749 19:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.749 19:20:24 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.749 19:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.749 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:46.749 [2024-02-14 19:20:24.016508] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.749 19:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.749 19:20:24 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:46.749 19:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.749 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:46.749 [2024-02-14 19:20:24.024276] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:46.749 [ 00:19:46.749 { 00:19:46.749 "allow_any_host": true, 00:19:46.749 "hosts": [], 00:19:46.749 "listen_addresses": [], 00:19:46.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:46.749 "subtype": "Discovery" 00:19:46.749 }, 00:19:46.749 { 00:19:46.749 "allow_any_host": true, 00:19:46.749 "hosts": 
[], 00:19:46.749 "listen_addresses": [ 00:19:46.749 { 00:19:46.749 "adrfam": "IPv4", 00:19:46.749 "traddr": "10.0.0.2", 00:19:46.749 "transport": "TCP", 00:19:46.749 "trsvcid": "4420", 00:19:46.749 "trtype": "TCP" 00:19:46.749 } 00:19:46.749 ], 00:19:46.749 "max_cntlid": 65519, 00:19:46.749 "max_namespaces": 2, 00:19:46.749 "min_cntlid": 1, 00:19:46.749 "model_number": "SPDK bdev Controller", 00:19:46.749 "namespaces": [ 00:19:46.749 { 00:19:46.749 "bdev_name": "Malloc0", 00:19:46.749 "name": "Malloc0", 00:19:46.749 "nguid": "87F20CF290004B398B971746BFCC32D4", 00:19:46.749 "nsid": 1, 00:19:46.749 "uuid": "87f20cf2-9000-4b39-8b97-1746bfcc32d4" 00:19:46.749 } 00:19:46.749 ], 00:19:46.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.749 "serial_number": "SPDK00000000000001", 00:19:46.749 "subtype": "NVMe" 00:19:46.749 } 00:19:46.749 ] 00:19:46.749 19:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.749 19:20:24 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:46.749 19:20:24 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:46.749 19:20:24 -- host/aer.sh@33 -- # aerpid=80953 00:19:46.749 19:20:24 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:46.749 19:20:24 -- common/autotest_common.sh@1242 -- # local i=0 00:19:46.749 19:20:24 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:46.749 19:20:24 -- common/autotest_common.sh@1244 -- # '[' 0 -lt 200 ']' 00:19:46.749 19:20:24 -- common/autotest_common.sh@1245 -- # i=1 00:19:46.749 19:20:24 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:19:46.749 19:20:24 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:46.749 19:20:24 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:46.749 19:20:24 -- common/autotest_common.sh@1244 -- # '[' 1 -lt 200 ']' 00:19:46.749 19:20:24 -- common/autotest_common.sh@1245 -- # i=2 00:19:46.749 19:20:24 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:19:47.007 19:20:24 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:47.007 19:20:24 -- common/autotest_common.sh@1249 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:47.007 19:20:24 -- common/autotest_common.sh@1253 -- # return 0 00:19:47.007 19:20:24 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:47.007 19:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.007 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.007 Malloc1 00:19:47.007 19:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.007 19:20:24 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:47.007 19:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.007 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.007 19:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.007 19:20:24 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:47.007 19:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.007 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.007 Asynchronous Event Request test 00:19:47.007 Attaching to 10.0.0.2 00:19:47.007 Attached to 10.0.0.2 00:19:47.007 Registering asynchronous event callbacks... 00:19:47.007 Starting namespace attribute notice tests for all controllers... 
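host/aer.sh reaches this point with a handful of RPCs: create the TCP transport, expose a malloc namespace over nqn.2016-06.io.spdk:cnode1, start the aer example waiting for notices, then hot-add a second namespace so the controller raises the namespace-attribute-changed event reported next. A condensed sketch of that sequence with the same arguments as the trace (the rpc wrapper below stands in for the harness's rpc_cmd, and the waitforfile helper is simplified to a polling loop):

#!/usr/bin/env bash
# RPC sequence behind the AER test (arguments as seen in the trace).
set -e
SPDK=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 --name Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start the AER listener (expects 2 namespaces, touches a file once it is registered).
"$SPDK/test/nvme/aer/aer" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # simplified waitforfile

# Hot-adding a second namespace makes the target raise the
# "namespace attribute changed" asynchronous event.
rpc bdev_malloc_create 64 4096 --name Malloc1
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

wait "$aerpid"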
00:19:47.007 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:47.007 aer_cb - Changed Namespace 00:19:47.007 Cleaning up... 00:19:47.007 [ 00:19:47.007 { 00:19:47.007 "allow_any_host": true, 00:19:47.007 "hosts": [], 00:19:47.007 "listen_addresses": [], 00:19:47.007 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:47.007 "subtype": "Discovery" 00:19:47.007 }, 00:19:47.007 { 00:19:47.007 "allow_any_host": true, 00:19:47.007 "hosts": [], 00:19:47.007 "listen_addresses": [ 00:19:47.007 { 00:19:47.007 "adrfam": "IPv4", 00:19:47.007 "traddr": "10.0.0.2", 00:19:47.007 "transport": "TCP", 00:19:47.007 "trsvcid": "4420", 00:19:47.007 "trtype": "TCP" 00:19:47.007 } 00:19:47.007 ], 00:19:47.007 "max_cntlid": 65519, 00:19:47.007 "max_namespaces": 2, 00:19:47.007 "min_cntlid": 1, 00:19:47.008 "model_number": "SPDK bdev Controller", 00:19:47.008 "namespaces": [ 00:19:47.008 { 00:19:47.008 "bdev_name": "Malloc0", 00:19:47.008 "name": "Malloc0", 00:19:47.008 "nguid": "87F20CF290004B398B971746BFCC32D4", 00:19:47.008 "nsid": 1, 00:19:47.008 "uuid": "87f20cf2-9000-4b39-8b97-1746bfcc32d4" 00:19:47.008 }, 00:19:47.008 { 00:19:47.008 "bdev_name": "Malloc1", 00:19:47.008 "name": "Malloc1", 00:19:47.008 "nguid": "0B44DAA9499A41129742A1A49A626058", 00:19:47.008 "nsid": 2, 00:19:47.008 "uuid": "0b44daa9-499a-4112-9742-a1a49a626058" 00:19:47.008 } 00:19:47.008 ], 00:19:47.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.008 "serial_number": "SPDK00000000000001", 00:19:47.008 "subtype": "NVMe" 00:19:47.008 } 00:19:47.008 ] 00:19:47.008 19:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.008 19:20:24 -- host/aer.sh@43 -- # wait 80953 00:19:47.008 19:20:24 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:47.008 19:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.008 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.008 19:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.008 19:20:24 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:47.008 19:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.008 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.266 19:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.266 19:20:24 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.266 19:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.266 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.266 19:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.266 19:20:24 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:47.266 19:20:24 -- host/aer.sh@51 -- # nvmftestfini 00:19:47.266 19:20:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:47.266 19:20:24 -- nvmf/common.sh@116 -- # sync 00:19:47.266 19:20:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:47.266 19:20:24 -- nvmf/common.sh@119 -- # set +e 00:19:47.266 19:20:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:47.266 19:20:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:47.266 rmmod nvme_tcp 00:19:47.266 rmmod nvme_fabrics 00:19:47.266 rmmod nvme_keyring 00:19:47.266 19:20:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:47.266 19:20:24 -- nvmf/common.sh@123 -- # set -e 00:19:47.266 19:20:24 -- nvmf/common.sh@124 -- # return 0 00:19:47.266 19:20:24 -- nvmf/common.sh@477 -- # '[' -n 80898 ']' 00:19:47.266 19:20:24 -- nvmf/common.sh@478 -- # killprocess 80898 00:19:47.266 19:20:24 -- 
common/autotest_common.sh@924 -- # '[' -z 80898 ']' 00:19:47.266 19:20:24 -- common/autotest_common.sh@928 -- # kill -0 80898 00:19:47.266 19:20:24 -- common/autotest_common.sh@929 -- # uname 00:19:47.266 19:20:24 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:47.266 19:20:24 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 80898 00:19:47.266 killing process with pid 80898 00:19:47.266 19:20:24 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:47.266 19:20:24 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:47.266 19:20:24 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 80898' 00:19:47.266 19:20:24 -- common/autotest_common.sh@943 -- # kill 80898 00:19:47.266 [2024-02-14 19:20:24.553376] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:47.266 19:20:24 -- common/autotest_common.sh@948 -- # wait 80898 00:19:47.525 19:20:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:47.525 19:20:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:47.525 19:20:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:47.525 19:20:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.525 19:20:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:47.525 19:20:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.525 19:20:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.525 19:20:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.525 19:20:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:47.525 ************************************ 00:19:47.525 END TEST nvmf_aer 00:19:47.525 ************************************ 00:19:47.525 00:19:47.525 real 0m2.343s 00:19:47.525 user 0m6.293s 00:19:47.525 sys 0m0.679s 00:19:47.525 19:20:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:47.525 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.525 19:20:24 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:47.525 19:20:24 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:19:47.525 19:20:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:47.525 19:20:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.525 ************************************ 00:19:47.525 START TEST nvmf_async_init 00:19:47.525 ************************************ 00:19:47.525 19:20:24 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:47.784 * Looking for test storage... 
00:19:47.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.784 19:20:25 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.784 19:20:25 -- nvmf/common.sh@7 -- # uname -s 00:19:47.784 19:20:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.784 19:20:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.784 19:20:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.784 19:20:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.784 19:20:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.784 19:20:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.784 19:20:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.784 19:20:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.784 19:20:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.785 19:20:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.785 19:20:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:47.785 19:20:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:47.785 19:20:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.785 19:20:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.785 19:20:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.785 19:20:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.785 19:20:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.785 19:20:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.785 19:20:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.785 19:20:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.785 19:20:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.785 19:20:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.785 19:20:25 -- 
paths/export.sh@5 -- # export PATH 00:19:47.785 19:20:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.785 19:20:25 -- nvmf/common.sh@46 -- # : 0 00:19:47.785 19:20:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.785 19:20:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.785 19:20:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.785 19:20:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.785 19:20:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.785 19:20:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.785 19:20:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.785 19:20:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.785 19:20:25 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:47.785 19:20:25 -- host/async_init.sh@14 -- # null_block_size=512 00:19:47.785 19:20:25 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:47.785 19:20:25 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:47.785 19:20:25 -- host/async_init.sh@20 -- # uuidgen 00:19:47.785 19:20:25 -- host/async_init.sh@20 -- # tr -d - 00:19:47.785 19:20:25 -- host/async_init.sh@20 -- # nguid=34f3cc1f1526420ca1ba9f44560c9270 00:19:47.785 19:20:25 -- host/async_init.sh@22 -- # nvmftestinit 00:19:47.785 19:20:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:47.785 19:20:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.785 19:20:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.785 19:20:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.785 19:20:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.785 19:20:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.785 19:20:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.785 19:20:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.785 19:20:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:47.785 19:20:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:47.785 19:20:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:47.785 19:20:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:47.785 19:20:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:47.785 19:20:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:47.785 19:20:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.785 19:20:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.785 19:20:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:47.785 19:20:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:47.785 19:20:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.785 19:20:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.785 19:20:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.785 19:20:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.785 19:20:25 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.785 19:20:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.785 19:20:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.785 19:20:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.785 19:20:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:47.785 19:20:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:47.785 Cannot find device "nvmf_tgt_br" 00:19:47.785 19:20:25 -- nvmf/common.sh@154 -- # true 00:19:47.785 19:20:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.785 Cannot find device "nvmf_tgt_br2" 00:19:47.785 19:20:25 -- nvmf/common.sh@155 -- # true 00:19:47.785 19:20:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:47.785 19:20:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:47.785 Cannot find device "nvmf_tgt_br" 00:19:47.785 19:20:25 -- nvmf/common.sh@157 -- # true 00:19:47.785 19:20:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:47.785 Cannot find device "nvmf_tgt_br2" 00:19:47.785 19:20:25 -- nvmf/common.sh@158 -- # true 00:19:47.785 19:20:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:47.785 19:20:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:47.785 19:20:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.785 19:20:25 -- nvmf/common.sh@161 -- # true 00:19:47.785 19:20:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.785 19:20:25 -- nvmf/common.sh@162 -- # true 00:19:47.785 19:20:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.785 19:20:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.785 19:20:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.785 19:20:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.785 19:20:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.785 19:20:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:48.044 19:20:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:48.044 19:20:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:48.044 19:20:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:48.044 19:20:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:48.044 19:20:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:48.044 19:20:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:48.044 19:20:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:48.044 19:20:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:48.044 19:20:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:48.044 19:20:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:48.044 19:20:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:48.044 19:20:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:48.044 19:20:25 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:48.044 19:20:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:48.044 19:20:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:48.044 19:20:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:48.044 19:20:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:48.044 19:20:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:48.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:19:48.044 00:19:48.044 --- 10.0.0.2 ping statistics --- 00:19:48.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.044 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:48.044 19:20:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:48.044 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:48.044 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:19:48.044 00:19:48.044 --- 10.0.0.3 ping statistics --- 00:19:48.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.044 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:19:48.044 19:20:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:48.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:48.044 00:19:48.044 --- 10.0.0.1 ping statistics --- 00:19:48.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.044 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:48.044 19:20:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.044 19:20:25 -- nvmf/common.sh@421 -- # return 0 00:19:48.044 19:20:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:48.044 19:20:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.044 19:20:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:48.044 19:20:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:48.044 19:20:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.044 19:20:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:48.044 19:20:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:48.044 19:20:25 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:48.044 19:20:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:48.044 19:20:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:48.044 19:20:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.044 19:20:25 -- nvmf/common.sh@469 -- # nvmfpid=81120 00:19:48.044 19:20:25 -- nvmf/common.sh@470 -- # waitforlisten 81120 00:19:48.044 19:20:25 -- common/autotest_common.sh@817 -- # '[' -z 81120 ']' 00:19:48.044 19:20:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.044 19:20:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:48.044 19:20:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:48.044 19:20:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
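Between tests the previous target is torn down with killprocess before a fresh one is started here for async_init. The xtrace earlier in this log shows what that helper does step by step: confirm the PID is alive with kill -0, read the process name (reactor_0 in these runs), skip the kill if it is a sudo wrapper, then kill and wait. A simplified reconstruction of that pattern (the real helper lives in the harness's autotest_common.sh and handles the sudo case differently):

#!/usr/bin/env bash
# killprocess pattern reconstructed from the xtrace above: check the PID,
# sanity-check the process name, then kill and reap it.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1               # is it still running?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # the real helper special-cases sudo-wrapped processes; skipped in this sketch
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it if it is our child
}

# usage: killprocess "$nvmfpid"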
00:19:48.044 19:20:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:48.044 19:20:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.044 [2024-02-14 19:20:25.424274] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:48.044 [2024-02-14 19:20:25.424355] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.303 [2024-02-14 19:20:25.561004] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.303 [2024-02-14 19:20:25.653953] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:48.303 [2024-02-14 19:20:25.654136] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.303 [2024-02-14 19:20:25.654154] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.303 [2024-02-14 19:20:25.654167] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.303 [2024-02-14 19:20:25.654199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.240 19:20:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:49.240 19:20:26 -- common/autotest_common.sh@850 -- # return 0 00:19:49.240 19:20:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.240 19:20:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:49.240 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.240 19:20:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.240 19:20:26 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:49.240 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.240 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.240 [2024-02-14 19:20:26.392424] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.240 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.240 19:20:26 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:49.240 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.240 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.240 null0 00:19:49.240 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.240 19:20:26 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:49.240 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.240 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.240 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.240 19:20:26 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:49.240 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.240 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.240 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.240 19:20:26 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 34f3cc1f1526420ca1ba9f44560c9270 00:19:49.240 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.240 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.240 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.240 19:20:26 -- host/async_init.sh@31 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:49.240 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.240 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.240 [2024-02-14 19:20:26.432562] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.240 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.240 19:20:26 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:49.240 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.240 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.499 nvme0n1 00:19:49.499 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.499 19:20:26 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:49.499 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.499 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.499 [ 00:19:49.499 { 00:19:49.499 "aliases": [ 00:19:49.499 "34f3cc1f-1526-420c-a1ba-9f44560c9270" 00:19:49.499 ], 00:19:49.499 "assigned_rate_limits": { 00:19:49.499 "r_mbytes_per_sec": 0, 00:19:49.499 "rw_ios_per_sec": 0, 00:19:49.499 "rw_mbytes_per_sec": 0, 00:19:49.499 "w_mbytes_per_sec": 0 00:19:49.499 }, 00:19:49.499 "block_size": 512, 00:19:49.499 "claimed": false, 00:19:49.499 "driver_specific": { 00:19:49.499 "mp_policy": "active_passive", 00:19:49.499 "nvme": [ 00:19:49.499 { 00:19:49.499 "ctrlr_data": { 00:19:49.499 "ana_reporting": false, 00:19:49.499 "cntlid": 1, 00:19:49.499 "firmware_revision": "24.05", 00:19:49.499 "model_number": "SPDK bdev Controller", 00:19:49.499 "multi_ctrlr": true, 00:19:49.499 "oacs": { 00:19:49.500 "firmware": 0, 00:19:49.500 "format": 0, 00:19:49.500 "ns_manage": 0, 00:19:49.500 "security": 0 00:19:49.500 }, 00:19:49.500 "serial_number": "00000000000000000000", 00:19:49.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:49.500 "vendor_id": "0x8086" 00:19:49.500 }, 00:19:49.500 "ns_data": { 00:19:49.500 "can_share": true, 00:19:49.500 "id": 1 00:19:49.500 }, 00:19:49.500 "trid": { 00:19:49.500 "adrfam": "IPv4", 00:19:49.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:49.500 "traddr": "10.0.0.2", 00:19:49.500 "trsvcid": "4420", 00:19:49.500 "trtype": "TCP" 00:19:49.500 }, 00:19:49.500 "vs": { 00:19:49.500 "nvme_version": "1.3" 00:19:49.500 } 00:19:49.500 } 00:19:49.500 ] 00:19:49.500 }, 00:19:49.500 "name": "nvme0n1", 00:19:49.500 "num_blocks": 2097152, 00:19:49.500 "product_name": "NVMe disk", 00:19:49.500 "supported_io_types": { 00:19:49.500 "abort": true, 00:19:49.500 "compare": true, 00:19:49.500 "compare_and_write": true, 00:19:49.500 "flush": true, 00:19:49.500 "nvme_admin": true, 00:19:49.500 "nvme_io": true, 00:19:49.500 "read": true, 00:19:49.500 "reset": true, 00:19:49.500 "unmap": false, 00:19:49.500 "write": true, 00:19:49.500 "write_zeroes": true 00:19:49.500 }, 00:19:49.500 "uuid": "34f3cc1f-1526-420c-a1ba-9f44560c9270", 00:19:49.500 "zoned": false 00:19:49.500 } 00:19:49.500 ] 00:19:49.500 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.500 19:20:26 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:49.500 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.500 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.500 [2024-02-14 19:20:26.692484] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:49.500 [2024-02-14 19:20:26.692605] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220f690 (9): Bad file descriptor 00:19:49.500 [2024-02-14 19:20:26.824610] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:49.500 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.500 19:20:26 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:49.500 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.500 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.500 [ 00:19:49.500 { 00:19:49.500 "aliases": [ 00:19:49.500 "34f3cc1f-1526-420c-a1ba-9f44560c9270" 00:19:49.500 ], 00:19:49.500 "assigned_rate_limits": { 00:19:49.500 "r_mbytes_per_sec": 0, 00:19:49.500 "rw_ios_per_sec": 0, 00:19:49.500 "rw_mbytes_per_sec": 0, 00:19:49.500 "w_mbytes_per_sec": 0 00:19:49.500 }, 00:19:49.500 "block_size": 512, 00:19:49.500 "claimed": false, 00:19:49.500 "driver_specific": { 00:19:49.500 "mp_policy": "active_passive", 00:19:49.500 "nvme": [ 00:19:49.500 { 00:19:49.500 "ctrlr_data": { 00:19:49.500 "ana_reporting": false, 00:19:49.500 "cntlid": 2, 00:19:49.500 "firmware_revision": "24.05", 00:19:49.500 "model_number": "SPDK bdev Controller", 00:19:49.500 "multi_ctrlr": true, 00:19:49.500 "oacs": { 00:19:49.500 "firmware": 0, 00:19:49.500 "format": 0, 00:19:49.500 "ns_manage": 0, 00:19:49.500 "security": 0 00:19:49.500 }, 00:19:49.500 "serial_number": "00000000000000000000", 00:19:49.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:49.500 "vendor_id": "0x8086" 00:19:49.500 }, 00:19:49.500 "ns_data": { 00:19:49.500 "can_share": true, 00:19:49.500 "id": 1 00:19:49.500 }, 00:19:49.500 "trid": { 00:19:49.500 "adrfam": "IPv4", 00:19:49.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:49.500 "traddr": "10.0.0.2", 00:19:49.500 "trsvcid": "4420", 00:19:49.500 "trtype": "TCP" 00:19:49.500 }, 00:19:49.500 "vs": { 00:19:49.500 "nvme_version": "1.3" 00:19:49.500 } 00:19:49.500 } 00:19:49.500 ] 00:19:49.500 }, 00:19:49.500 "name": "nvme0n1", 00:19:49.500 "num_blocks": 2097152, 00:19:49.500 "product_name": "NVMe disk", 00:19:49.500 "supported_io_types": { 00:19:49.500 "abort": true, 00:19:49.500 "compare": true, 00:19:49.500 "compare_and_write": true, 00:19:49.500 "flush": true, 00:19:49.500 "nvme_admin": true, 00:19:49.500 "nvme_io": true, 00:19:49.500 "read": true, 00:19:49.500 "reset": true, 00:19:49.500 "unmap": false, 00:19:49.500 "write": true, 00:19:49.500 "write_zeroes": true 00:19:49.500 }, 00:19:49.500 "uuid": "34f3cc1f-1526-420c-a1ba-9f44560c9270", 00:19:49.500 "zoned": false 00:19:49.500 } 00:19:49.500 ] 00:19:49.500 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.500 19:20:26 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.500 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.500 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.500 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.500 19:20:26 -- host/async_init.sh@53 -- # mktemp 00:19:49.500 19:20:26 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.TDygvwMdbr 00:19:49.500 19:20:26 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:49.500 19:20:26 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.TDygvwMdbr 00:19:49.500 19:20:26 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode0 --disable 00:19:49.500 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.500 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.500 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.500 19:20:26 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:49.500 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.500 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.500 [2024-02-14 19:20:26.892655] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.500 [2024-02-14 19:20:26.892786] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:49.500 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.500 19:20:26 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TDygvwMdbr 00:19:49.500 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.500 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.500 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.500 19:20:26 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TDygvwMdbr 00:19:49.500 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.500 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.500 [2024-02-14 19:20:26.908690] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.760 nvme0n1 00:19:49.760 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.760 19:20:26 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:49.760 19:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.760 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.760 [ 00:19:49.760 { 00:19:49.760 "aliases": [ 00:19:49.760 "34f3cc1f-1526-420c-a1ba-9f44560c9270" 00:19:49.760 ], 00:19:49.760 "assigned_rate_limits": { 00:19:49.760 "r_mbytes_per_sec": 0, 00:19:49.760 "rw_ios_per_sec": 0, 00:19:49.760 "rw_mbytes_per_sec": 0, 00:19:49.760 "w_mbytes_per_sec": 0 00:19:49.760 }, 00:19:49.760 "block_size": 512, 00:19:49.760 "claimed": false, 00:19:49.760 "driver_specific": { 00:19:49.760 "mp_policy": "active_passive", 00:19:49.760 "nvme": [ 00:19:49.760 { 00:19:49.760 "ctrlr_data": { 00:19:49.760 "ana_reporting": false, 00:19:49.760 "cntlid": 3, 00:19:49.760 "firmware_revision": "24.05", 00:19:49.760 "model_number": "SPDK bdev Controller", 00:19:49.760 "multi_ctrlr": true, 00:19:49.760 "oacs": { 00:19:49.760 "firmware": 0, 00:19:49.760 "format": 0, 00:19:49.760 "ns_manage": 0, 00:19:49.760 "security": 0 00:19:49.760 }, 00:19:49.760 "serial_number": "00000000000000000000", 00:19:49.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:49.760 "vendor_id": "0x8086" 00:19:49.760 }, 00:19:49.760 "ns_data": { 00:19:49.760 "can_share": true, 00:19:49.760 "id": 1 00:19:49.760 }, 00:19:49.760 "trid": { 00:19:49.760 "adrfam": "IPv4", 00:19:49.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:49.760 "traddr": "10.0.0.2", 00:19:49.760 "trsvcid": "4421", 00:19:49.760 "trtype": "TCP" 00:19:49.760 }, 00:19:49.760 "vs": { 00:19:49.760 "nvme_version": "1.3" 00:19:49.760 } 00:19:49.760 } 00:19:49.760 ] 00:19:49.760 }, 00:19:49.760 
"name": "nvme0n1", 00:19:49.760 "num_blocks": 2097152, 00:19:49.760 "product_name": "NVMe disk", 00:19:49.760 "supported_io_types": { 00:19:49.760 "abort": true, 00:19:49.760 "compare": true, 00:19:49.760 "compare_and_write": true, 00:19:49.760 "flush": true, 00:19:49.760 "nvme_admin": true, 00:19:49.760 "nvme_io": true, 00:19:49.760 "read": true, 00:19:49.760 "reset": true, 00:19:49.760 "unmap": false, 00:19:49.760 "write": true, 00:19:49.760 "write_zeroes": true 00:19:49.760 }, 00:19:49.760 "uuid": "34f3cc1f-1526-420c-a1ba-9f44560c9270", 00:19:49.760 "zoned": false 00:19:49.760 } 00:19:49.760 ] 00:19:49.760 19:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.760 19:20:26 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.760 19:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.760 19:20:27 -- common/autotest_common.sh@10 -- # set +x 00:19:49.760 19:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.760 19:20:27 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.TDygvwMdbr 00:19:49.760 19:20:27 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:49.760 19:20:27 -- host/async_init.sh@78 -- # nvmftestfini 00:19:49.760 19:20:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:49.760 19:20:27 -- nvmf/common.sh@116 -- # sync 00:19:49.760 19:20:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:49.760 19:20:27 -- nvmf/common.sh@119 -- # set +e 00:19:49.760 19:20:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:49.760 19:20:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:49.760 rmmod nvme_tcp 00:19:49.760 rmmod nvme_fabrics 00:19:49.760 rmmod nvme_keyring 00:19:49.760 19:20:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:49.760 19:20:27 -- nvmf/common.sh@123 -- # set -e 00:19:49.760 19:20:27 -- nvmf/common.sh@124 -- # return 0 00:19:49.760 19:20:27 -- nvmf/common.sh@477 -- # '[' -n 81120 ']' 00:19:49.760 19:20:27 -- nvmf/common.sh@478 -- # killprocess 81120 00:19:49.760 19:20:27 -- common/autotest_common.sh@924 -- # '[' -z 81120 ']' 00:19:49.760 19:20:27 -- common/autotest_common.sh@928 -- # kill -0 81120 00:19:49.760 19:20:27 -- common/autotest_common.sh@929 -- # uname 00:19:49.760 19:20:27 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:49.760 19:20:27 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 81120 00:19:49.760 killing process with pid 81120 00:19:49.760 19:20:27 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:49.760 19:20:27 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:49.760 19:20:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 81120' 00:19:49.760 19:20:27 -- common/autotest_common.sh@943 -- # kill 81120 00:19:49.760 19:20:27 -- common/autotest_common.sh@948 -- # wait 81120 00:19:50.019 19:20:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:50.019 19:20:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:50.019 19:20:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:50.019 19:20:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.019 19:20:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:50.019 19:20:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.019 19:20:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.019 19:20:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.019 19:20:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:50.019 
00:19:50.019 real 0m2.471s 00:19:50.019 user 0m2.289s 00:19:50.019 sys 0m0.596s 00:19:50.019 19:20:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:50.019 ************************************ 00:19:50.020 END TEST nvmf_async_init 00:19:50.020 ************************************ 00:19:50.020 19:20:27 -- common/autotest_common.sh@10 -- # set +x 00:19:50.279 19:20:27 -- nvmf/nvmf.sh@93 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:50.279 19:20:27 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:19:50.279 19:20:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:50.279 19:20:27 -- common/autotest_common.sh@10 -- # set +x 00:19:50.279 ************************************ 00:19:50.279 START TEST dma 00:19:50.279 ************************************ 00:19:50.279 19:20:27 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:50.279 * Looking for test storage... 00:19:50.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:50.279 19:20:27 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:50.279 19:20:27 -- nvmf/common.sh@7 -- # uname -s 00:19:50.279 19:20:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.279 19:20:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.279 19:20:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.279 19:20:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.279 19:20:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.279 19:20:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.279 19:20:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.279 19:20:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.279 19:20:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.279 19:20:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.279 19:20:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:50.279 19:20:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:50.279 19:20:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.279 19:20:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.279 19:20:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:50.279 19:20:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:50.279 19:20:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.279 19:20:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.279 19:20:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.279 19:20:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.279 19:20:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.279 19:20:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.279 19:20:27 -- paths/export.sh@5 -- # export PATH 00:19:50.279 19:20:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.279 19:20:27 -- nvmf/common.sh@46 -- # : 0 00:19:50.279 19:20:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:50.279 19:20:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:50.279 19:20:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:50.279 19:20:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.279 19:20:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.279 19:20:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:50.279 19:20:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:50.279 19:20:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:50.279 19:20:27 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:50.279 19:20:27 -- host/dma.sh@13 -- # exit 0 00:19:50.279 00:19:50.279 real 0m0.107s 00:19:50.279 user 0m0.045s 00:19:50.279 sys 0m0.068s 00:19:50.279 ************************************ 00:19:50.279 END TEST dma 00:19:50.279 ************************************ 00:19:50.279 19:20:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:50.279 19:20:27 -- common/autotest_common.sh@10 -- # set +x 00:19:50.279 19:20:27 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:50.279 19:20:27 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:19:50.279 19:20:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:50.279 19:20:27 -- common/autotest_common.sh@10 -- # set +x 00:19:50.279 ************************************ 00:19:50.279 START TEST nvmf_identify 00:19:50.279 ************************************ 00:19:50.279 19:20:27 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:50.279 * Looking for test storage... 
00:19:50.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:50.538 19:20:27 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:50.538 19:20:27 -- nvmf/common.sh@7 -- # uname -s 00:19:50.538 19:20:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.538 19:20:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.539 19:20:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.539 19:20:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.539 19:20:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.539 19:20:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.539 19:20:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.539 19:20:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.539 19:20:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.539 19:20:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.539 19:20:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:50.539 19:20:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:50.539 19:20:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.539 19:20:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.539 19:20:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:50.539 19:20:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:50.539 19:20:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.539 19:20:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.539 19:20:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.539 19:20:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.539 19:20:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.539 19:20:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.539 19:20:27 -- paths/export.sh@5 
-- # export PATH 00:19:50.539 19:20:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.539 19:20:27 -- nvmf/common.sh@46 -- # : 0 00:19:50.539 19:20:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:50.539 19:20:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:50.539 19:20:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:50.539 19:20:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.539 19:20:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.539 19:20:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:50.539 19:20:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:50.539 19:20:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:50.539 19:20:27 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:50.539 19:20:27 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:50.539 19:20:27 -- host/identify.sh@14 -- # nvmftestinit 00:19:50.539 19:20:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:50.539 19:20:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.539 19:20:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:50.539 19:20:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:50.539 19:20:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:50.539 19:20:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.539 19:20:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.539 19:20:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.539 19:20:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:50.539 19:20:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:50.539 19:20:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:50.539 19:20:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:50.539 19:20:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:50.539 19:20:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:50.539 19:20:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.539 19:20:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.539 19:20:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:50.539 19:20:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:50.539 19:20:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:50.539 19:20:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:50.539 19:20:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:50.539 19:20:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.539 19:20:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:50.539 19:20:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:50.539 19:20:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:50.539 19:20:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:50.539 19:20:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:50.539 19:20:27 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:50.539 Cannot find device "nvmf_tgt_br" 00:19:50.539 19:20:27 -- nvmf/common.sh@154 -- # true 00:19:50.539 19:20:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:50.539 Cannot find device "nvmf_tgt_br2" 00:19:50.539 19:20:27 -- nvmf/common.sh@155 -- # true 00:19:50.539 19:20:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:50.539 19:20:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:50.539 Cannot find device "nvmf_tgt_br" 00:19:50.539 19:20:27 -- nvmf/common.sh@157 -- # true 00:19:50.539 19:20:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:50.539 Cannot find device "nvmf_tgt_br2" 00:19:50.539 19:20:27 -- nvmf/common.sh@158 -- # true 00:19:50.539 19:20:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:50.539 19:20:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:50.539 19:20:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:50.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.539 19:20:27 -- nvmf/common.sh@161 -- # true 00:19:50.539 19:20:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:50.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.539 19:20:27 -- nvmf/common.sh@162 -- # true 00:19:50.539 19:20:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:50.539 19:20:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:50.539 19:20:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:50.539 19:20:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:50.539 19:20:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:50.539 19:20:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:50.798 19:20:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:50.798 19:20:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:50.798 19:20:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:50.798 19:20:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:50.798 19:20:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:50.798 19:20:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:50.798 19:20:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:50.798 19:20:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:50.798 19:20:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:50.798 19:20:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:50.798 19:20:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:50.798 19:20:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:50.798 19:20:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:50.798 19:20:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:50.798 19:20:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:50.799 19:20:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:50.799 19:20:28 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:50.799 19:20:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:50.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:19:50.799 00:19:50.799 --- 10.0.0.2 ping statistics --- 00:19:50.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.799 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:50.799 19:20:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:50.799 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:50.799 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:50.799 00:19:50.799 --- 10.0.0.3 ping statistics --- 00:19:50.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.799 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:50.799 19:20:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:50.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:50.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:50.799 00:19:50.799 --- 10.0.0.1 ping statistics --- 00:19:50.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.799 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:50.799 19:20:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.799 19:20:28 -- nvmf/common.sh@421 -- # return 0 00:19:50.799 19:20:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:50.799 19:20:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.799 19:20:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:50.799 19:20:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:50.799 19:20:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.799 19:20:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:50.799 19:20:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:50.799 19:20:28 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:50.799 19:20:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:50.799 19:20:28 -- common/autotest_common.sh@10 -- # set +x 00:19:50.799 19:20:28 -- host/identify.sh@19 -- # nvmfpid=81383 00:19:50.799 19:20:28 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:50.799 19:20:28 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:50.799 19:20:28 -- host/identify.sh@23 -- # waitforlisten 81383 00:19:50.799 19:20:28 -- common/autotest_common.sh@817 -- # '[' -z 81383 ']' 00:19:50.799 19:20:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.799 19:20:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:50.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.799 19:20:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.799 19:20:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:50.799 19:20:28 -- common/autotest_common.sh@10 -- # set +x 00:19:50.799 [2024-02-14 19:20:28.177617] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:19:50.799 [2024-02-14 19:20:28.177711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.058 [2024-02-14 19:20:28.314595] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.058 [2024-02-14 19:20:28.395412] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:51.058 [2024-02-14 19:20:28.395815] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.058 [2024-02-14 19:20:28.395930] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.058 [2024-02-14 19:20:28.396072] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.058 [2024-02-14 19:20:28.396230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.058 [2024-02-14 19:20:28.396855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.058 [2024-02-14 19:20:28.396999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.058 [2024-02-14 19:20:28.397005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.995 19:20:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:51.995 19:20:29 -- common/autotest_common.sh@850 -- # return 0 00:19:51.995 19:20:29 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:51.995 19:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.995 19:20:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.995 [2024-02-14 19:20:29.106588] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.995 19:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.995 19:20:29 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:51.995 19:20:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:51.995 19:20:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.995 19:20:29 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:51.995 19:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.995 19:20:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.995 Malloc0 00:19:51.995 19:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.995 19:20:29 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:51.995 19:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.995 19:20:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.995 19:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.995 19:20:29 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:51.995 19:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.995 19:20:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.995 19:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.995 19:20:29 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.995 19:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.995 19:20:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.995 [2024-02-14 19:20:29.214065] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.995 19:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.995 19:20:29 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:51.995 19:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.995 19:20:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.996 19:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.996 19:20:29 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:51.996 19:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.996 19:20:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.996 [2024-02-14 19:20:29.229819] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:51.996 [ 00:19:51.996 { 00:19:51.996 "allow_any_host": true, 00:19:51.996 "hosts": [], 00:19:51.996 "listen_addresses": [ 00:19:51.996 { 00:19:51.996 "adrfam": "IPv4", 00:19:51.996 "traddr": "10.0.0.2", 00:19:51.996 "transport": "TCP", 00:19:51.996 "trsvcid": "4420", 00:19:51.996 "trtype": "TCP" 00:19:51.996 } 00:19:51.996 ], 00:19:51.996 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:51.996 "subtype": "Discovery" 00:19:51.996 }, 00:19:51.996 { 00:19:51.996 "allow_any_host": true, 00:19:51.996 "hosts": [], 00:19:51.996 "listen_addresses": [ 00:19:51.996 { 00:19:51.996 "adrfam": "IPv4", 00:19:51.996 "traddr": "10.0.0.2", 00:19:51.996 "transport": "TCP", 00:19:51.996 "trsvcid": "4420", 00:19:51.996 "trtype": "TCP" 00:19:51.996 } 00:19:51.996 ], 00:19:51.996 "max_cntlid": 65519, 00:19:51.996 "max_namespaces": 32, 00:19:51.996 "min_cntlid": 1, 00:19:51.996 "model_number": "SPDK bdev Controller", 00:19:51.996 "namespaces": [ 00:19:51.996 { 00:19:51.996 "bdev_name": "Malloc0", 00:19:51.996 "eui64": "ABCDEF0123456789", 00:19:51.996 "name": "Malloc0", 00:19:51.996 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:51.996 "nsid": 1, 00:19:51.996 "uuid": "c856abcd-be3c-4c56-9901-572a614c8b23" 00:19:51.996 } 00:19:51.996 ], 00:19:51.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.996 "serial_number": "SPDK00000000000001", 00:19:51.996 "subtype": "NVMe" 00:19:51.996 } 00:19:51.996 ] 00:19:51.996 19:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.996 19:20:29 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:51.996 [2024-02-14 19:20:29.261802] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:19:51.996 [2024-02-14 19:20:29.262007] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81436 ] 00:19:51.996 [2024-02-14 19:20:29.393483] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:51.996 [2024-02-14 19:20:29.393580] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:51.996 [2024-02-14 19:20:29.393588] nvme_tcp.c:2246:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:51.996 [2024-02-14 19:20:29.393601] nvme_tcp.c:2264:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:51.996 [2024-02-14 19:20:29.393613] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:51.996 [2024-02-14 19:20:29.393802] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:51.996 [2024-02-14 19:20:29.393858] nvme_tcp.c:1485:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcd8410 0 00:19:51.996 [2024-02-14 19:20:29.409504] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:51.996 [2024-02-14 19:20:29.409533] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:51.996 [2024-02-14 19:20:29.409539] nvme_tcp.c:1531:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:51.996 [2024-02-14 19:20:29.409542] nvme_tcp.c:1532:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:51.996 [2024-02-14 19:20:29.409595] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:51.996 [2024-02-14 19:20:29.409604] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:51.996 [2024-02-14 19:20:29.409608] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8410) 00:19:51.996 [2024-02-14 19:20:29.409626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:51.996 [2024-02-14 19:20:29.409677] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17870, cid 0, qid 0 00:19:52.261 [2024-02-14 19:20:29.417507] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.261 [2024-02-14 19:20:29.417545] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.261 [2024-02-14 19:20:29.417551] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.261 [2024-02-14 19:20:29.417556] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17870) on tqpair=0xcd8410 00:19:52.261 [2024-02-14 19:20:29.417572] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:52.261 [2024-02-14 19:20:29.417580] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:52.261 [2024-02-14 19:20:29.417586] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:52.261 [2024-02-14 19:20:29.417608] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.261 [2024-02-14 19:20:29.417613] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.261 [2024-02-14 19:20:29.417617] 
nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8410) 00:19:52.261 [2024-02-14 19:20:29.417625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.261 [2024-02-14 19:20:29.417658] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17870, cid 0, qid 0 00:19:52.261 [2024-02-14 19:20:29.417755] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.261 [2024-02-14 19:20:29.417763] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.261 [2024-02-14 19:20:29.417766] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.261 [2024-02-14 19:20:29.417770] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17870) on tqpair=0xcd8410 00:19:52.261 [2024-02-14 19:20:29.417782] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:52.261 [2024-02-14 19:20:29.417790] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:52.261 [2024-02-14 19:20:29.417799] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.261 [2024-02-14 19:20:29.417803] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.261 [2024-02-14 19:20:29.417806] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8410) 00:19:52.261 [2024-02-14 19:20:29.417814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.261 [2024-02-14 19:20:29.417838] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17870, cid 0, qid 0 00:19:52.261 [2024-02-14 19:20:29.417892] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.261 [2024-02-14 19:20:29.417900] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.262 [2024-02-14 19:20:29.417903] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.417907] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17870) on tqpair=0xcd8410 00:19:52.262 [2024-02-14 19:20:29.417914] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:52.262 [2024-02-14 19:20:29.417923] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:52.262 [2024-02-14 19:20:29.417931] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.417935] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.417938] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8410) 00:19:52.262 [2024-02-14 19:20:29.417946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.262 [2024-02-14 19:20:29.417969] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17870, cid 0, qid 0 00:19:52.262 [2024-02-14 19:20:29.418020] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.262 [2024-02-14 19:20:29.418037] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:19:52.262 [2024-02-14 19:20:29.418042] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418046] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17870) on tqpair=0xcd8410 00:19:52.262 [2024-02-14 19:20:29.418053] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:52.262 [2024-02-14 19:20:29.418064] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418070] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418073] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8410) 00:19:52.262 [2024-02-14 19:20:29.418080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.262 [2024-02-14 19:20:29.418109] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17870, cid 0, qid 0 00:19:52.262 [2024-02-14 19:20:29.418158] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.262 [2024-02-14 19:20:29.418173] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.262 [2024-02-14 19:20:29.418179] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418183] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17870) on tqpair=0xcd8410 00:19:52.262 [2024-02-14 19:20:29.418188] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:52.262 [2024-02-14 19:20:29.418204] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:52.262 [2024-02-14 19:20:29.418212] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:52.262 [2024-02-14 19:20:29.418318] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:52.262 [2024-02-14 19:20:29.418340] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:52.262 [2024-02-14 19:20:29.418351] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418355] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418359] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8410) 00:19:52.262 [2024-02-14 19:20:29.418366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.262 [2024-02-14 19:20:29.418391] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17870, cid 0, qid 0 00:19:52.262 [2024-02-14 19:20:29.418440] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.262 [2024-02-14 19:20:29.418448] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.262 [2024-02-14 19:20:29.418451] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418455] nvme_tcp.c: 
855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17870) on tqpair=0xcd8410 00:19:52.262 [2024-02-14 19:20:29.418461] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:52.262 [2024-02-14 19:20:29.418471] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418476] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418479] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8410) 00:19:52.262 [2024-02-14 19:20:29.418486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.262 [2024-02-14 19:20:29.418510] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17870, cid 0, qid 0 00:19:52.262 [2024-02-14 19:20:29.418630] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.262 [2024-02-14 19:20:29.418644] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.262 [2024-02-14 19:20:29.418649] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418653] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17870) on tqpair=0xcd8410 00:19:52.262 [2024-02-14 19:20:29.418658] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:52.262 [2024-02-14 19:20:29.418664] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:52.262 [2024-02-14 19:20:29.418672] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:52.262 [2024-02-14 19:20:29.418683] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:52.262 [2024-02-14 19:20:29.418695] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418699] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418703] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8410) 00:19:52.262 [2024-02-14 19:20:29.418711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.262 [2024-02-14 19:20:29.418738] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17870, cid 0, qid 0 00:19:52.262 [2024-02-14 19:20:29.418869] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.262 [2024-02-14 19:20:29.418887] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.262 [2024-02-14 19:20:29.418893] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418897] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8410): datao=0, datal=4096, cccid=0 00:19:52.262 [2024-02-14 19:20:29.418902] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd17870) on tqpair(0xcd8410): expected_datao=0, payload_size=4096 00:19:52.262 [2024-02-14 19:20:29.418921] 
nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418925] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418935] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.262 [2024-02-14 19:20:29.418942] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.262 [2024-02-14 19:20:29.418945] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.418949] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17870) on tqpair=0xcd8410 00:19:52.262 [2024-02-14 19:20:29.418959] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:52.262 [2024-02-14 19:20:29.418971] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:52.262 [2024-02-14 19:20:29.418976] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:52.262 [2024-02-14 19:20:29.418981] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:52.262 [2024-02-14 19:20:29.418986] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:52.262 [2024-02-14 19:20:29.418991] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:52.262 [2024-02-14 19:20:29.419001] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:52.262 [2024-02-14 19:20:29.419009] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.419015] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.419018] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8410) 00:19:52.262 [2024-02-14 19:20:29.419026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.262 [2024-02-14 19:20:29.419054] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17870, cid 0, qid 0 00:19:52.262 [2024-02-14 19:20:29.419159] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.262 [2024-02-14 19:20:29.419171] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.262 [2024-02-14 19:20:29.419176] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.419180] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17870) on tqpair=0xcd8410 00:19:52.262 [2024-02-14 19:20:29.419188] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.419192] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.419196] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd8410) 00:19:52.262 [2024-02-14 19:20:29.419202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.262 [2024-02-14 19:20:29.419208] nvme_tcp.c: 737:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.419212] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.419215] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcd8410) 00:19:52.262 [2024-02-14 19:20:29.419220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.262 [2024-02-14 19:20:29.419226] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.419230] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.262 [2024-02-14 19:20:29.419233] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcd8410) 00:19:52.263 [2024-02-14 19:20:29.419238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.263 [2024-02-14 19:20:29.419244] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419247] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419250] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.263 [2024-02-14 19:20:29.419255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.263 [2024-02-14 19:20:29.419261] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:52.263 [2024-02-14 19:20:29.419287] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:52.263 [2024-02-14 19:20:29.419296] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419300] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419303] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8410) 00:19:52.263 [2024-02-14 19:20:29.419310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.263 [2024-02-14 19:20:29.419348] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17870, cid 0, qid 0 00:19:52.263 [2024-02-14 19:20:29.419377] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd179d0, cid 1, qid 0 00:19:52.263 [2024-02-14 19:20:29.419383] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17b30, cid 2, qid 0 00:19:52.263 [2024-02-14 19:20:29.419388] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.263 [2024-02-14 19:20:29.419392] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17df0, cid 4, qid 0 00:19:52.263 [2024-02-14 19:20:29.419464] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.263 [2024-02-14 19:20:29.419478] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.263 [2024-02-14 19:20:29.419483] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419499] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17df0) on tqpair=0xcd8410 00:19:52.263 
[2024-02-14 19:20:29.419506] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:52.263 [2024-02-14 19:20:29.419512] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:52.263 [2024-02-14 19:20:29.419525] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419531] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419551] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8410) 00:19:52.263 [2024-02-14 19:20:29.419559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.263 [2024-02-14 19:20:29.419586] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17df0, cid 4, qid 0 00:19:52.263 [2024-02-14 19:20:29.419653] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.263 [2024-02-14 19:20:29.419667] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.263 [2024-02-14 19:20:29.419672] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419675] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8410): datao=0, datal=4096, cccid=4 00:19:52.263 [2024-02-14 19:20:29.419680] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd17df0) on tqpair(0xcd8410): expected_datao=0, payload_size=4096 00:19:52.263 [2024-02-14 19:20:29.419688] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419692] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419702] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.263 [2024-02-14 19:20:29.419708] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.263 [2024-02-14 19:20:29.419712] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419716] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17df0) on tqpair=0xcd8410 00:19:52.263 [2024-02-14 19:20:29.419731] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:52.263 [2024-02-14 19:20:29.419754] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419762] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419766] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8410) 00:19:52.263 [2024-02-14 19:20:29.419773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.263 [2024-02-14 19:20:29.419781] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419785] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419788] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd8410) 00:19:52.263 [2024-02-14 19:20:29.419794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:52.263 [2024-02-14 19:20:29.419829] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17df0, cid 4, qid 0 00:19:52.263 [2024-02-14 19:20:29.419837] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17f50, cid 5, qid 0 00:19:52.263 [2024-02-14 19:20:29.419956] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.263 [2024-02-14 19:20:29.419970] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.263 [2024-02-14 19:20:29.419976] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419979] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8410): datao=0, datal=1024, cccid=4 00:19:52.263 [2024-02-14 19:20:29.419984] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd17df0) on tqpair(0xcd8410): expected_datao=0, payload_size=1024 00:19:52.263 [2024-02-14 19:20:29.419991] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.419995] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.420000] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.263 [2024-02-14 19:20:29.420006] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.263 [2024-02-14 19:20:29.420010] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.420014] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17f50) on tqpair=0xcd8410 00:19:52.263 [2024-02-14 19:20:29.463561] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.263 [2024-02-14 19:20:29.463585] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.263 [2024-02-14 19:20:29.463590] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.463595] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17df0) on tqpair=0xcd8410 00:19:52.263 [2024-02-14 19:20:29.463617] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.463624] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.463628] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8410) 00:19:52.263 [2024-02-14 19:20:29.463636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.263 [2024-02-14 19:20:29.463688] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17df0, cid 4, qid 0 00:19:52.263 [2024-02-14 19:20:29.463776] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.263 [2024-02-14 19:20:29.463784] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.263 [2024-02-14 19:20:29.463788] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.463791] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8410): datao=0, datal=3072, cccid=4 00:19:52.263 [2024-02-14 19:20:29.463796] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd17df0) on tqpair(0xcd8410): expected_datao=0, payload_size=3072 00:19:52.263 [2024-02-14 19:20:29.463803] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 
19:20:29.463807] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.463833] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.263 [2024-02-14 19:20:29.463840] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.263 [2024-02-14 19:20:29.463843] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.463847] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17df0) on tqpair=0xcd8410 00:19:52.263 [2024-02-14 19:20:29.463861] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.463866] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.463869] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd8410) 00:19:52.263 [2024-02-14 19:20:29.463876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.263 [2024-02-14 19:20:29.463921] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17df0, cid 4, qid 0 00:19:52.263 [2024-02-14 19:20:29.463994] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.263 [2024-02-14 19:20:29.464001] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.263 [2024-02-14 19:20:29.464005] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.464009] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd8410): datao=0, datal=8, cccid=4 00:19:52.263 [2024-02-14 19:20:29.464013] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd17df0) on tqpair(0xcd8410): expected_datao=0, payload_size=8 00:19:52.263 [2024-02-14 19:20:29.464020] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.464024] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.504564] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.263 [2024-02-14 19:20:29.504591] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.263 [2024-02-14 19:20:29.504615] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.263 [2024-02-14 19:20:29.504620] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17df0) on tqpair=0xcd8410 00:19:52.263 ===================================================== 00:19:52.263 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:52.263 ===================================================== 00:19:52.263 Controller Capabilities/Features 00:19:52.264 ================================ 00:19:52.264 Vendor ID: 0000 00:19:52.264 Subsystem Vendor ID: 0000 00:19:52.264 Serial Number: .................... 00:19:52.264 Model Number: ........................................ 
00:19:52.264 Firmware Version: 24.05 00:19:52.264 Recommended Arb Burst: 0 00:19:52.264 IEEE OUI Identifier: 00 00 00 00:19:52.264 Multi-path I/O 00:19:52.264 May have multiple subsystem ports: No 00:19:52.264 May have multiple controllers: No 00:19:52.264 Associated with SR-IOV VF: No 00:19:52.264 Max Data Transfer Size: 131072 00:19:52.264 Max Number of Namespaces: 0 00:19:52.264 Max Number of I/O Queues: 1024 00:19:52.264 NVMe Specification Version (VS): 1.3 00:19:52.264 NVMe Specification Version (Identify): 1.3 00:19:52.264 Maximum Queue Entries: 128 00:19:52.264 Contiguous Queues Required: Yes 00:19:52.264 Arbitration Mechanisms Supported 00:19:52.264 Weighted Round Robin: Not Supported 00:19:52.264 Vendor Specific: Not Supported 00:19:52.264 Reset Timeout: 15000 ms 00:19:52.264 Doorbell Stride: 4 bytes 00:19:52.264 NVM Subsystem Reset: Not Supported 00:19:52.264 Command Sets Supported 00:19:52.264 NVM Command Set: Supported 00:19:52.264 Boot Partition: Not Supported 00:19:52.264 Memory Page Size Minimum: 4096 bytes 00:19:52.264 Memory Page Size Maximum: 4096 bytes 00:19:52.264 Persistent Memory Region: Not Supported 00:19:52.264 Optional Asynchronous Events Supported 00:19:52.264 Namespace Attribute Notices: Not Supported 00:19:52.264 Firmware Activation Notices: Not Supported 00:19:52.264 ANA Change Notices: Not Supported 00:19:52.264 PLE Aggregate Log Change Notices: Not Supported 00:19:52.264 LBA Status Info Alert Notices: Not Supported 00:19:52.264 EGE Aggregate Log Change Notices: Not Supported 00:19:52.264 Normal NVM Subsystem Shutdown event: Not Supported 00:19:52.264 Zone Descriptor Change Notices: Not Supported 00:19:52.264 Discovery Log Change Notices: Supported 00:19:52.264 Controller Attributes 00:19:52.264 128-bit Host Identifier: Not Supported 00:19:52.264 Non-Operational Permissive Mode: Not Supported 00:19:52.264 NVM Sets: Not Supported 00:19:52.264 Read Recovery Levels: Not Supported 00:19:52.264 Endurance Groups: Not Supported 00:19:52.264 Predictable Latency Mode: Not Supported 00:19:52.264 Traffic Based Keep ALive: Not Supported 00:19:52.264 Namespace Granularity: Not Supported 00:19:52.264 SQ Associations: Not Supported 00:19:52.264 UUID List: Not Supported 00:19:52.264 Multi-Domain Subsystem: Not Supported 00:19:52.264 Fixed Capacity Management: Not Supported 00:19:52.264 Variable Capacity Management: Not Supported 00:19:52.264 Delete Endurance Group: Not Supported 00:19:52.264 Delete NVM Set: Not Supported 00:19:52.264 Extended LBA Formats Supported: Not Supported 00:19:52.264 Flexible Data Placement Supported: Not Supported 00:19:52.264 00:19:52.264 Controller Memory Buffer Support 00:19:52.264 ================================ 00:19:52.264 Supported: No 00:19:52.264 00:19:52.264 Persistent Memory Region Support 00:19:52.264 ================================ 00:19:52.264 Supported: No 00:19:52.264 00:19:52.264 Admin Command Set Attributes 00:19:52.264 ============================ 00:19:52.264 Security Send/Receive: Not Supported 00:19:52.264 Format NVM: Not Supported 00:19:52.264 Firmware Activate/Download: Not Supported 00:19:52.264 Namespace Management: Not Supported 00:19:52.264 Device Self-Test: Not Supported 00:19:52.264 Directives: Not Supported 00:19:52.264 NVMe-MI: Not Supported 00:19:52.264 Virtualization Management: Not Supported 00:19:52.264 Doorbell Buffer Config: Not Supported 00:19:52.264 Get LBA Status Capability: Not Supported 00:19:52.264 Command & Feature Lockdown Capability: Not Supported 00:19:52.264 Abort Command Limit: 1 00:19:52.264 Async 
Event Request Limit: 4 00:19:52.264 Number of Firmware Slots: N/A 00:19:52.264 Firmware Slot 1 Read-Only: N/A 00:19:52.264 Firmware Activation Without Reset: N/A 00:19:52.264 Multiple Update Detection Support: N/A 00:19:52.264 Firmware Update Granularity: No Information Provided 00:19:52.264 Per-Namespace SMART Log: No 00:19:52.264 Asymmetric Namespace Access Log Page: Not Supported 00:19:52.264 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:52.264 Command Effects Log Page: Not Supported 00:19:52.264 Get Log Page Extended Data: Supported 00:19:52.264 Telemetry Log Pages: Not Supported 00:19:52.264 Persistent Event Log Pages: Not Supported 00:19:52.264 Supported Log Pages Log Page: May Support 00:19:52.264 Commands Supported & Effects Log Page: Not Supported 00:19:52.264 Feature Identifiers & Effects Log Page:May Support 00:19:52.264 NVMe-MI Commands & Effects Log Page: May Support 00:19:52.264 Data Area 4 for Telemetry Log: Not Supported 00:19:52.264 Error Log Page Entries Supported: 128 00:19:52.264 Keep Alive: Not Supported 00:19:52.264 00:19:52.264 NVM Command Set Attributes 00:19:52.264 ========================== 00:19:52.264 Submission Queue Entry Size 00:19:52.264 Max: 1 00:19:52.264 Min: 1 00:19:52.264 Completion Queue Entry Size 00:19:52.264 Max: 1 00:19:52.264 Min: 1 00:19:52.264 Number of Namespaces: 0 00:19:52.264 Compare Command: Not Supported 00:19:52.264 Write Uncorrectable Command: Not Supported 00:19:52.264 Dataset Management Command: Not Supported 00:19:52.264 Write Zeroes Command: Not Supported 00:19:52.264 Set Features Save Field: Not Supported 00:19:52.264 Reservations: Not Supported 00:19:52.264 Timestamp: Not Supported 00:19:52.264 Copy: Not Supported 00:19:52.264 Volatile Write Cache: Not Present 00:19:52.264 Atomic Write Unit (Normal): 1 00:19:52.264 Atomic Write Unit (PFail): 1 00:19:52.264 Atomic Compare & Write Unit: 1 00:19:52.264 Fused Compare & Write: Supported 00:19:52.264 Scatter-Gather List 00:19:52.264 SGL Command Set: Supported 00:19:52.264 SGL Keyed: Supported 00:19:52.264 SGL Bit Bucket Descriptor: Not Supported 00:19:52.264 SGL Metadata Pointer: Not Supported 00:19:52.264 Oversized SGL: Not Supported 00:19:52.264 SGL Metadata Address: Not Supported 00:19:52.264 SGL Offset: Supported 00:19:52.264 Transport SGL Data Block: Not Supported 00:19:52.264 Replay Protected Memory Block: Not Supported 00:19:52.264 00:19:52.264 Firmware Slot Information 00:19:52.264 ========================= 00:19:52.264 Active slot: 0 00:19:52.264 00:19:52.264 00:19:52.264 Error Log 00:19:52.264 ========= 00:19:52.264 00:19:52.264 Active Namespaces 00:19:52.264 ================= 00:19:52.264 Discovery Log Page 00:19:52.264 ================== 00:19:52.264 Generation Counter: 2 00:19:52.264 Number of Records: 2 00:19:52.264 Record Format: 0 00:19:52.264 00:19:52.264 Discovery Log Entry 0 00:19:52.264 ---------------------- 00:19:52.264 Transport Type: 3 (TCP) 00:19:52.264 Address Family: 1 (IPv4) 00:19:52.264 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:52.264 Entry Flags: 00:19:52.264 Duplicate Returned Information: 1 00:19:52.264 Explicit Persistent Connection Support for Discovery: 1 00:19:52.264 Transport Requirements: 00:19:52.264 Secure Channel: Not Required 00:19:52.264 Port ID: 0 (0x0000) 00:19:52.264 Controller ID: 65535 (0xffff) 00:19:52.264 Admin Max SQ Size: 128 00:19:52.264 Transport Service Identifier: 4420 00:19:52.264 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:52.264 Transport Address: 10.0.0.2 00:19:52.264 
Discovery Log Entry 1 00:19:52.264 ---------------------- 00:19:52.264 Transport Type: 3 (TCP) 00:19:52.264 Address Family: 1 (IPv4) 00:19:52.264 Subsystem Type: 2 (NVM Subsystem) 00:19:52.264 Entry Flags: 00:19:52.264 Duplicate Returned Information: 0 00:19:52.264 Explicit Persistent Connection Support for Discovery: 0 00:19:52.264 Transport Requirements: 00:19:52.264 Secure Channel: Not Required 00:19:52.264 Port ID: 0 (0x0000) 00:19:52.264 Controller ID: 65535 (0xffff) 00:19:52.264 Admin Max SQ Size: 128 00:19:52.264 Transport Service Identifier: 4420 00:19:52.264 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:52.264 Transport Address: 10.0.0.2 [2024-02-14 19:20:29.504773] nvme_ctrlr.c:4208:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:52.264 [2024-02-14 19:20:29.504795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.265 [2024-02-14 19:20:29.504803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.265 [2024-02-14 19:20:29.504809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.265 [2024-02-14 19:20:29.504815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.265 [2024-02-14 19:20:29.504841] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.504846] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.504850] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.504859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.504890] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.504951] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.265 [2024-02-14 19:20:29.504959] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.265 [2024-02-14 19:20:29.504963] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.504967] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.265 [2024-02-14 19:20:29.504975] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.504980] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.504983] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.504991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.505035] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.505102] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.265 [2024-02-14 19:20:29.505109] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.265 [2024-02-14 19:20:29.505113] 
nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505117] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.265 [2024-02-14 19:20:29.505123] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:52.265 [2024-02-14 19:20:29.505128] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:52.265 [2024-02-14 19:20:29.505138] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505143] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505147] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.505154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.505177] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.505231] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.265 [2024-02-14 19:20:29.505238] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.265 [2024-02-14 19:20:29.505242] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505246] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.265 [2024-02-14 19:20:29.505257] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505262] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505266] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.505273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.505297] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.505361] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.265 [2024-02-14 19:20:29.505368] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.265 [2024-02-14 19:20:29.505372] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505376] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.265 [2024-02-14 19:20:29.505387] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505392] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505396] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.505403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.505426] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.505477] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.265 [2024-02-14 
19:20:29.505485] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.265 [2024-02-14 19:20:29.505511] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505516] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.265 [2024-02-14 19:20:29.505529] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505534] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505538] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.505545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.505572] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.505623] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.265 [2024-02-14 19:20:29.505631] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.265 [2024-02-14 19:20:29.505635] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505638] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.265 [2024-02-14 19:20:29.505650] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505655] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505659] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.505666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.505706] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.505758] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.265 [2024-02-14 19:20:29.505765] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.265 [2024-02-14 19:20:29.505769] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505773] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.265 [2024-02-14 19:20:29.505784] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505789] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505792] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.505799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.505822] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.505884] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.265 [2024-02-14 19:20:29.505893] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.265 [2024-02-14 19:20:29.505898] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.265 
[2024-02-14 19:20:29.505902] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.265 [2024-02-14 19:20:29.505913] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505918] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.505922] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.505929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.505952] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.506002] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.265 [2024-02-14 19:20:29.506024] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.265 [2024-02-14 19:20:29.506030] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.506037] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.265 [2024-02-14 19:20:29.506049] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.506055] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.506059] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.506066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.506091] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.506142] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.265 [2024-02-14 19:20:29.506149] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.265 [2024-02-14 19:20:29.506153] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.506157] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.265 [2024-02-14 19:20:29.506168] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.506173] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.265 [2024-02-14 19:20:29.506177] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.265 [2024-02-14 19:20:29.506184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.265 [2024-02-14 19:20:29.506207] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.265 [2024-02-14 19:20:29.506262] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.266 [2024-02-14 19:20:29.506269] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.266 [2024-02-14 19:20:29.506273] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506277] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.266 [2024-02-14 19:20:29.506288] nvme_tcp.c: 
737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506293] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506297] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.266 [2024-02-14 19:20:29.506304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.266 [2024-02-14 19:20:29.506327] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.266 [2024-02-14 19:20:29.506390] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.266 [2024-02-14 19:20:29.506406] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.266 [2024-02-14 19:20:29.506412] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506416] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.266 [2024-02-14 19:20:29.506428] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506433] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506437] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.266 [2024-02-14 19:20:29.506445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.266 [2024-02-14 19:20:29.506469] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.266 [2024-02-14 19:20:29.506550] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.266 [2024-02-14 19:20:29.506560] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.266 [2024-02-14 19:20:29.506564] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506568] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.266 [2024-02-14 19:20:29.506579] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506585] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506588] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.266 [2024-02-14 19:20:29.506596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.266 [2024-02-14 19:20:29.506623] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.266 [2024-02-14 19:20:29.506676] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.266 [2024-02-14 19:20:29.506684] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.266 [2024-02-14 19:20:29.506687] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506691] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.266 [2024-02-14 19:20:29.506703] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506708] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506711] 
nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.266 [2024-02-14 19:20:29.506719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.266 [2024-02-14 19:20:29.506742] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.266 [2024-02-14 19:20:29.506820] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.266 [2024-02-14 19:20:29.506829] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.266 [2024-02-14 19:20:29.506833] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506837] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.266 [2024-02-14 19:20:29.506848] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506854] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506858] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.266 [2024-02-14 19:20:29.506865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.266 [2024-02-14 19:20:29.506897] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.266 [2024-02-14 19:20:29.506962] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.266 [2024-02-14 19:20:29.506971] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.266 [2024-02-14 19:20:29.506975] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.506979] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.266 [2024-02-14 19:20:29.507002] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.507007] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.266 [2024-02-14 19:20:29.507011] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.266 [2024-02-14 19:20:29.507018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.266 [2024-02-14 19:20:29.507041] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.266 [2024-02-14 19:20:29.507094] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.267 [2024-02-14 19:20:29.507102] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.267 [2024-02-14 19:20:29.507107] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.507111] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.267 [2024-02-14 19:20:29.507122] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.507127] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.507131] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.267 [2024-02-14 19:20:29.507138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-02-14 19:20:29.507161] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.267 [2024-02-14 19:20:29.507243] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.267 [2024-02-14 19:20:29.507251] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.267 [2024-02-14 19:20:29.507256] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.507260] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.267 [2024-02-14 19:20:29.507271] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.507276] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.507280] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.267 [2024-02-14 19:20:29.507287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-02-14 19:20:29.507310] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.267 [2024-02-14 19:20:29.507360] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.267 [2024-02-14 19:20:29.507368] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.267 [2024-02-14 19:20:29.507372] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.507376] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.267 [2024-02-14 19:20:29.507387] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.507392] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.507396] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.267 [2024-02-14 19:20:29.507403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-02-14 19:20:29.507425] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 00:19:52.267 [2024-02-14 19:20:29.507475] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.267 [2024-02-14 19:20:29.507482] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.267 [2024-02-14 19:20:29.507486] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.511534] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.267 [2024-02-14 19:20:29.511552] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.511558] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.511562] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd8410) 00:19:52.267 [2024-02-14 19:20:29.511570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.267 [2024-02-14 19:20:29.511598] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd17c90, cid 3, qid 0 
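The traces above come from spdk_nvme_identify connecting to the discovery subsystem at 10.0.0.2:4420 over TCP, reading the discovery log page (the GET LOG PAGE commands with log page 0x70), printing it, and shutting the controller down. For orientation only, a minimal host-side sketch of that same sequence against SPDK's public NVMe API follows; it is not part of the test run, and apart from the transport string taken from this log, everything in it (program name, buffer handling, minimal error handling) is an assumption.

/*
 * Illustrative sketch only -- not part of the test run above.  It assumes
 * SPDK's public host API (spdk/nvme.h, spdk/env.h) and mirrors what the
 * DEBUG traces show: connect to the discovery subsystem over TCP, read the
 * discovery log page header (GET LOG PAGE 0x70), and detach.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_done;

static void
log_page_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	(void)cpl;
	g_done = true;
}

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvmf_discovery_log_page *log;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "discovery_sketch";    /* name is an assumption */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Transport string copied from the identify invocation in this log. */
	spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery");

	/* Runs the connect/enable state machine traced by nvme_ctrlr.c above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* DMA-able buffer for the discovery log page header (genctr/numrec). */
	log = spdk_zmalloc(sizeof(*log), 0, NULL, SPDK_ENV_SOCKET_ID_ANY,
			   SPDK_MALLOC_DMA);
	if (log != NULL &&
	    spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					     log, sizeof(*log), 0,
					     log_page_done, NULL) == 0) {
		while (!g_done) {
			spdk_nvme_ctrlr_process_admin_completions(ctrlr);
		}
		printf("generation counter %" PRIu64 ", %" PRIu64 " records\n",
		       log->genctr, log->numrec);
	}

	spdk_free(log);
	spdk_nvme_detach(ctrlr);
	return 0;
}

The two discovery log entries printed earlier in this output (the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420) are what such a read returns for this target.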
00:19:52.267 [2024-02-14 19:20:29.511690] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.267 [2024-02-14 19:20:29.511698] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.267 [2024-02-14 19:20:29.511701] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.267 [2024-02-14 19:20:29.511705] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd17c90) on tqpair=0xcd8410 00:19:52.267 [2024-02-14 19:20:29.511714] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:19:52.267 00:19:52.267 19:20:29 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:52.267 [2024-02-14 19:20:29.548841] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:19:52.267 [2024-02-14 19:20:29.548885] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81442 ] 00:19:52.529 [2024-02-14 19:20:29.686943] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:52.529 [2024-02-14 19:20:29.687030] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:52.529 [2024-02-14 19:20:29.687037] nvme_tcp.c:2246:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:52.529 [2024-02-14 19:20:29.687051] nvme_tcp.c:2264:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:52.529 [2024-02-14 19:20:29.687062] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:52.529 [2024-02-14 19:20:29.687180] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:52.529 [2024-02-14 19:20:29.687224] nvme_tcp.c:1485:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x676410 0 00:19:52.529 [2024-02-14 19:20:29.702512] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:52.529 [2024-02-14 19:20:29.702536] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:52.529 [2024-02-14 19:20:29.702542] nvme_tcp.c:1531:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:52.529 [2024-02-14 19:20:29.702545] nvme_tcp.c:1532:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:52.529 [2024-02-14 19:20:29.702585] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.529 [2024-02-14 19:20:29.702593] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.529 [2024-02-14 19:20:29.702597] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x676410) 00:19:52.529 [2024-02-14 19:20:29.702609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:52.529 [2024-02-14 19:20:29.702641] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5870, cid 0, qid 0 00:19:52.529 [2024-02-14 19:20:29.710509] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.529 [2024-02-14 19:20:29.710530] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:19:52.529 [2024-02-14 19:20:29.710536] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.529 [2024-02-14 19:20:29.710540] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5870) on tqpair=0x676410 00:19:52.529 [2024-02-14 19:20:29.710551] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:52.529 [2024-02-14 19:20:29.710558] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:52.529 [2024-02-14 19:20:29.710565] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:52.529 [2024-02-14 19:20:29.710581] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.710587] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.710591] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x676410) 00:19:52.530 [2024-02-14 19:20:29.710600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.530 [2024-02-14 19:20:29.710630] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5870, cid 0, qid 0 00:19:52.530 [2024-02-14 19:20:29.710723] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.530 [2024-02-14 19:20:29.710730] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.530 [2024-02-14 19:20:29.710734] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.710738] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5870) on tqpair=0x676410 00:19:52.530 [2024-02-14 19:20:29.710747] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:52.530 [2024-02-14 19:20:29.710756] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:52.530 [2024-02-14 19:20:29.710765] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.710769] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.710773] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x676410) 00:19:52.530 [2024-02-14 19:20:29.710803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.530 [2024-02-14 19:20:29.710838] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5870, cid 0, qid 0 00:19:52.530 [2024-02-14 19:20:29.711231] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.530 [2024-02-14 19:20:29.711247] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.530 [2024-02-14 19:20:29.711252] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.711256] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5870) on tqpair=0x676410 00:19:52.530 [2024-02-14 19:20:29.711262] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:52.530 [2024-02-14 19:20:29.711271] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to check en wait for cc (timeout 15000 ms) 00:19:52.530 [2024-02-14 19:20:29.711280] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.711285] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.711289] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x676410) 00:19:52.530 [2024-02-14 19:20:29.711296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.530 [2024-02-14 19:20:29.711320] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5870, cid 0, qid 0 00:19:52.530 [2024-02-14 19:20:29.711506] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.530 [2024-02-14 19:20:29.711520] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.530 [2024-02-14 19:20:29.711525] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.711529] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5870) on tqpair=0x676410 00:19:52.530 [2024-02-14 19:20:29.711535] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:52.530 [2024-02-14 19:20:29.711547] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.711552] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.711555] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x676410) 00:19:52.530 [2024-02-14 19:20:29.711563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.530 [2024-02-14 19:20:29.711584] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5870, cid 0, qid 0 00:19:52.530 [2024-02-14 19:20:29.712126] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.530 [2024-02-14 19:20:29.712141] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.530 [2024-02-14 19:20:29.712146] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.712150] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5870) on tqpair=0x676410 00:19:52.530 [2024-02-14 19:20:29.712155] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:52.530 [2024-02-14 19:20:29.712160] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:52.530 [2024-02-14 19:20:29.712169] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:52.530 [2024-02-14 19:20:29.712275] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:52.530 [2024-02-14 19:20:29.712280] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:52.530 [2024-02-14 19:20:29.712288] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.712293] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:19:52.530 [2024-02-14 19:20:29.712296] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x676410) 00:19:52.530 [2024-02-14 19:20:29.712303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.530 [2024-02-14 19:20:29.712328] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5870, cid 0, qid 0 00:19:52.530 [2024-02-14 19:20:29.712778] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.530 [2024-02-14 19:20:29.712794] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.530 [2024-02-14 19:20:29.712799] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.712803] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5870) on tqpair=0x676410 00:19:52.530 [2024-02-14 19:20:29.712808] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:52.530 [2024-02-14 19:20:29.712819] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.712824] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.712827] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x676410) 00:19:52.530 [2024-02-14 19:20:29.712835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.530 [2024-02-14 19:20:29.712860] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5870, cid 0, qid 0 00:19:52.530 [2024-02-14 19:20:29.712941] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.530 [2024-02-14 19:20:29.712948] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.530 [2024-02-14 19:20:29.712952] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.712956] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5870) on tqpair=0x676410 00:19:52.530 [2024-02-14 19:20:29.712960] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:52.530 [2024-02-14 19:20:29.712966] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:52.530 [2024-02-14 19:20:29.712974] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:52.530 [2024-02-14 19:20:29.712984] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:52.530 [2024-02-14 19:20:29.712994] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.712999] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.713003] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x676410) 00:19:52.530 [2024-02-14 19:20:29.713010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.530 [2024-02-14 19:20:29.713035] nvme_tcp.c: 
870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5870, cid 0, qid 0 00:19:52.530 [2024-02-14 19:20:29.713503] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.530 [2024-02-14 19:20:29.713520] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.530 [2024-02-14 19:20:29.713525] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.713529] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x676410): datao=0, datal=4096, cccid=0 00:19:52.530 [2024-02-14 19:20:29.713534] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b5870) on tqpair(0x676410): expected_datao=0, payload_size=4096 00:19:52.530 [2024-02-14 19:20:29.713542] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.713546] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.713555] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.530 [2024-02-14 19:20:29.713561] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.530 [2024-02-14 19:20:29.713565] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.713569] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5870) on tqpair=0x676410 00:19:52.530 [2024-02-14 19:20:29.713577] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:52.530 [2024-02-14 19:20:29.713587] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:52.530 [2024-02-14 19:20:29.713600] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:52.530 [2024-02-14 19:20:29.713604] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:52.530 [2024-02-14 19:20:29.713610] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:52.530 [2024-02-14 19:20:29.713615] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:52.530 [2024-02-14 19:20:29.713625] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:52.530 [2024-02-14 19:20:29.713633] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.713637] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.530 [2024-02-14 19:20:29.713641] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x676410) 00:19:52.530 [2024-02-14 19:20:29.713648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.530 [2024-02-14 19:20:29.713675] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5870, cid 0, qid 0 00:19:52.530 [2024-02-14 19:20:29.714121] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.530 [2024-02-14 19:20:29.714136] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.530 [2024-02-14 19:20:29.714141] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714145] nvme_tcp.c: 
855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5870) on tqpair=0x676410 00:19:52.531 [2024-02-14 19:20:29.714152] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714156] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714160] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x676410) 00:19:52.531 [2024-02-14 19:20:29.714167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.531 [2024-02-14 19:20:29.714182] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714186] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714189] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x676410) 00:19:52.531 [2024-02-14 19:20:29.714195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.531 [2024-02-14 19:20:29.714201] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714204] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714208] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x676410) 00:19:52.531 [2024-02-14 19:20:29.714213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.531 [2024-02-14 19:20:29.714219] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714223] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714226] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x676410) 00:19:52.531 [2024-02-14 19:20:29.714231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.531 [2024-02-14 19:20:29.714236] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.714250] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.714258] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714262] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.714266] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x676410) 00:19:52.531 [2024-02-14 19:20:29.714272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.531 [2024-02-14 19:20:29.714310] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5870, cid 0, qid 0 00:19:52.531 [2024-02-14 19:20:29.714318] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b59d0, cid 1, qid 0 00:19:52.531 [2024-02-14 19:20:29.714322] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5b30, cid 2, qid 0 00:19:52.531 [2024-02-14 19:20:29.714327] nvme_tcp.c: 
870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5c90, cid 3, qid 0 00:19:52.531 [2024-02-14 19:20:29.714331] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5df0, cid 4, qid 0 00:19:52.531 [2024-02-14 19:20:29.718516] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.531 [2024-02-14 19:20:29.718536] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.531 [2024-02-14 19:20:29.718542] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.718546] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5df0) on tqpair=0x676410 00:19:52.531 [2024-02-14 19:20:29.718552] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:52.531 [2024-02-14 19:20:29.718558] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.718568] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.718575] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.718583] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.718587] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.718591] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x676410) 00:19:52.531 [2024-02-14 19:20:29.718599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.531 [2024-02-14 19:20:29.718628] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5df0, cid 4, qid 0 00:19:52.531 [2024-02-14 19:20:29.718721] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.531 [2024-02-14 19:20:29.718729] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.531 [2024-02-14 19:20:29.718733] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.718736] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5df0) on tqpair=0x676410 00:19:52.531 [2024-02-14 19:20:29.718791] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.718806] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.718816] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.718820] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.718827] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x676410) 00:19:52.531 [2024-02-14 19:20:29.718834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.531 [2024-02-14 19:20:29.718860] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x6b5df0, cid 4, qid 0 00:19:52.531 [2024-02-14 19:20:29.719294] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.531 [2024-02-14 19:20:29.719310] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.531 [2024-02-14 19:20:29.719315] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.719319] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x676410): datao=0, datal=4096, cccid=4 00:19:52.531 [2024-02-14 19:20:29.719324] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b5df0) on tqpair(0x676410): expected_datao=0, payload_size=4096 00:19:52.531 [2024-02-14 19:20:29.719332] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.719336] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.719345] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.531 [2024-02-14 19:20:29.719351] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.531 [2024-02-14 19:20:29.719355] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.719359] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5df0) on tqpair=0x676410 00:19:52.531 [2024-02-14 19:20:29.719377] nvme_ctrlr.c:4544:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:52.531 [2024-02-14 19:20:29.719392] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.719405] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.719414] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.719418] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.719422] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x676410) 00:19:52.531 [2024-02-14 19:20:29.719429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.531 [2024-02-14 19:20:29.719456] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5df0, cid 4, qid 0 00:19:52.531 [2024-02-14 19:20:29.719922] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.531 [2024-02-14 19:20:29.719941] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.531 [2024-02-14 19:20:29.719946] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.719950] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x676410): datao=0, datal=4096, cccid=4 00:19:52.531 [2024-02-14 19:20:29.719955] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b5df0) on tqpair(0x676410): expected_datao=0, payload_size=4096 00:19:52.531 [2024-02-14 19:20:29.719962] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.719966] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.719976] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.531 [2024-02-14 19:20:29.719982] 
nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.531 [2024-02-14 19:20:29.719986] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.719990] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5df0) on tqpair=0x676410 00:19:52.531 [2024-02-14 19:20:29.720008] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.720022] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:52.531 [2024-02-14 19:20:29.720031] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.720036] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.720039] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x676410) 00:19:52.531 [2024-02-14 19:20:29.720047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.531 [2024-02-14 19:20:29.720073] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5df0, cid 4, qid 0 00:19:52.531 [2024-02-14 19:20:29.720463] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.531 [2024-02-14 19:20:29.720478] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.531 [2024-02-14 19:20:29.720483] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.720503] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x676410): datao=0, datal=4096, cccid=4 00:19:52.531 [2024-02-14 19:20:29.720510] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b5df0) on tqpair(0x676410): expected_datao=0, payload_size=4096 00:19:52.531 [2024-02-14 19:20:29.720518] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.720522] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.531 [2024-02-14 19:20:29.720531] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.531 [2024-02-14 19:20:29.720537] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.531 [2024-02-14 19:20:29.720541] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.720545] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5df0) on tqpair=0x676410 00:19:52.532 [2024-02-14 19:20:29.720554] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:52.532 [2024-02-14 19:20:29.720564] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:52.532 [2024-02-14 19:20:29.720576] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:52.532 [2024-02-14 19:20:29.720582] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:52.532 [2024-02-14 19:20:29.720588] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:52.532 [2024-02-14 19:20:29.720593] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:52.532 [2024-02-14 19:20:29.720598] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:52.532 [2024-02-14 19:20:29.720603] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:52.532 [2024-02-14 19:20:29.720658] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.720670] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.720674] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x676410) 00:19:52.532 [2024-02-14 19:20:29.720681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.532 [2024-02-14 19:20:29.720689] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.720693] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.720696] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x676410) 00:19:52.532 [2024-02-14 19:20:29.720702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.532 [2024-02-14 19:20:29.720741] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5df0, cid 4, qid 0 00:19:52.532 [2024-02-14 19:20:29.720750] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5f50, cid 5, qid 0 00:19:52.532 [2024-02-14 19:20:29.721258] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.532 [2024-02-14 19:20:29.721275] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.532 [2024-02-14 19:20:29.721280] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.721284] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5df0) on tqpair=0x676410 00:19:52.532 [2024-02-14 19:20:29.721291] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.532 [2024-02-14 19:20:29.721297] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.532 [2024-02-14 19:20:29.721301] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.721304] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5f50) on tqpair=0x676410 00:19:52.532 [2024-02-14 19:20:29.721315] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.721320] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.721324] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x676410) 00:19:52.532 [2024-02-14 19:20:29.721331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.532 [2024-02-14 19:20:29.721364] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5f50, cid 5, qid 0 00:19:52.532 [2024-02-14 19:20:29.721446] 
nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.532 [2024-02-14 19:20:29.721454] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.532 [2024-02-14 19:20:29.721458] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.721461] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5f50) on tqpair=0x676410 00:19:52.532 [2024-02-14 19:20:29.721472] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.721476] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.721480] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x676410) 00:19:52.532 [2024-02-14 19:20:29.721500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.532 [2024-02-14 19:20:29.721526] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5f50, cid 5, qid 0 00:19:52.532 [2024-02-14 19:20:29.721944] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.532 [2024-02-14 19:20:29.721960] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.532 [2024-02-14 19:20:29.721965] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.721969] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5f50) on tqpair=0x676410 00:19:52.532 [2024-02-14 19:20:29.721980] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.721985] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.721989] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x676410) 00:19:52.532 [2024-02-14 19:20:29.721996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.532 [2024-02-14 19:20:29.722020] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5f50, cid 5, qid 0 00:19:52.532 [2024-02-14 19:20:29.722101] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.532 [2024-02-14 19:20:29.722108] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.532 [2024-02-14 19:20:29.722112] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.722116] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5f50) on tqpair=0x676410 00:19:52.532 [2024-02-14 19:20:29.722130] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.722136] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.722140] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x676410) 00:19:52.532 [2024-02-14 19:20:29.722147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.532 [2024-02-14 19:20:29.722154] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.722159] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.722162] nvme_tcp.c: 
900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x676410) 00:19:52.532 [2024-02-14 19:20:29.722168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.532 [2024-02-14 19:20:29.722176] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.722180] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.722184] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x676410) 00:19:52.532 [2024-02-14 19:20:29.722189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.532 [2024-02-14 19:20:29.722197] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.722201] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.722205] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x676410) 00:19:52.532 [2024-02-14 19:20:29.722211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.532 [2024-02-14 19:20:29.722236] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5f50, cid 5, qid 0 00:19:52.532 [2024-02-14 19:20:29.722243] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5df0, cid 4, qid 0 00:19:52.532 [2024-02-14 19:20:29.722248] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b60b0, cid 6, qid 0 00:19:52.532 [2024-02-14 19:20:29.722252] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b6210, cid 7, qid 0 00:19:52.532 [2024-02-14 19:20:29.726515] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.532 [2024-02-14 19:20:29.726535] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.532 [2024-02-14 19:20:29.726540] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.726544] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x676410): datao=0, datal=8192, cccid=5 00:19:52.532 [2024-02-14 19:20:29.726549] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b5f50) on tqpair(0x676410): expected_datao=0, payload_size=8192 00:19:52.532 [2024-02-14 19:20:29.726556] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.726560] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.726566] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.532 [2024-02-14 19:20:29.726571] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.532 [2024-02-14 19:20:29.726574] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.726578] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x676410): datao=0, datal=512, cccid=4 00:19:52.532 [2024-02-14 19:20:29.726582] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b5df0) on tqpair(0x676410): expected_datao=0, payload_size=512 00:19:52.532 [2024-02-14 19:20:29.726588] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.726592] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.726597] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.532 [2024-02-14 19:20:29.726602] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.532 [2024-02-14 19:20:29.726605] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.726609] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x676410): datao=0, datal=512, cccid=6 00:19:52.532 [2024-02-14 19:20:29.726613] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b60b0) on tqpair(0x676410): expected_datao=0, payload_size=512 00:19:52.532 [2024-02-14 19:20:29.726619] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.726623] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.726628] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:52.532 [2024-02-14 19:20:29.726633] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:52.532 [2024-02-14 19:20:29.726637] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:52.532 [2024-02-14 19:20:29.726640] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x676410): datao=0, datal=4096, cccid=7 00:19:52.532 [2024-02-14 19:20:29.726645] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b6210) on tqpair(0x676410): expected_datao=0, payload_size=4096 00:19:52.532 [2024-02-14 19:20:29.726651] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:52.533 [2024-02-14 19:20:29.726655] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:52.533 [2024-02-14 19:20:29.726660] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.533 [2024-02-14 19:20:29.726665] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.533 [2024-02-14 19:20:29.726668] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.533 [2024-02-14 19:20:29.726672] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5f50) on tqpair=0x676410 00:19:52.533 ===================================================== 00:19:52.533 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:52.533 ===================================================== 00:19:52.533 Controller Capabilities/Features 00:19:52.533 ================================ 00:19:52.533 Vendor ID: 8086 00:19:52.533 Subsystem Vendor ID: 8086 00:19:52.533 Serial Number: SPDK00000000000001 00:19:52.533 Model Number: SPDK bdev Controller 00:19:52.533 Firmware Version: 24.05 00:19:52.533 Recommended Arb Burst: 6 00:19:52.533 IEEE OUI Identifier: e4 d2 5c 00:19:52.533 Multi-path I/O 00:19:52.533 May have multiple subsystem ports: Yes 00:19:52.533 May have multiple controllers: Yes 00:19:52.533 Associated with SR-IOV VF: No 00:19:52.533 Max Data Transfer Size: 131072 00:19:52.533 Max Number of Namespaces: 32 00:19:52.533 Max Number of I/O Queues: 127 00:19:52.533 NVMe Specification Version (VS): 1.3 00:19:52.533 NVMe Specification Version (Identify): 1.3 00:19:52.533 Maximum Queue Entries: 128 00:19:52.533 Contiguous Queues Required: Yes 00:19:52.533 Arbitration Mechanisms Supported 00:19:52.533 Weighted Round Robin: Not Supported 00:19:52.533 Vendor Specific: Not Supported 00:19:52.533 Reset Timeout: 
15000 ms 00:19:52.533 Doorbell Stride: 4 bytes 00:19:52.533 NVM Subsystem Reset: Not Supported 00:19:52.533 Command Sets Supported 00:19:52.533 NVM Command Set: Supported 00:19:52.533 Boot Partition: Not Supported 00:19:52.533 Memory Page Size Minimum: 4096 bytes 00:19:52.533 Memory Page Size Maximum: 4096 bytes 00:19:52.533 Persistent Memory Region: Not Supported 00:19:52.533 Optional Asynchronous Events Supported 00:19:52.533 Namespace Attribute Notices: Supported 00:19:52.533 Firmware Activation Notices: Not Supported 00:19:52.533 ANA Change Notices: Not Supported 00:19:52.533 PLE Aggregate Log Change Notices: Not Supported 00:19:52.533 LBA Status Info Alert Notices: Not Supported 00:19:52.533 EGE Aggregate Log Change Notices: Not Supported 00:19:52.533 Normal NVM Subsystem Shutdown event: Not Supported 00:19:52.533 Zone Descriptor Change Notices: Not Supported 00:19:52.533 Discovery Log Change Notices: Not Supported 00:19:52.533 Controller Attributes 00:19:52.533 128-bit Host Identifier: Supported 00:19:52.533 Non-Operational Permissive Mode: Not Supported 00:19:52.533 NVM Sets: Not Supported 00:19:52.533 Read Recovery Levels: Not Supported 00:19:52.533 Endurance Groups: Not Supported 00:19:52.533 Predictable Latency Mode: Not Supported 00:19:52.533 Traffic Based Keep ALive: Not Supported 00:19:52.533 Namespace Granularity: Not Supported 00:19:52.533 SQ Associations: Not Supported 00:19:52.533 UUID List: Not Supported 00:19:52.533 Multi-Domain Subsystem: Not Supported 00:19:52.533 Fixed Capacity Management: Not Supported 00:19:52.533 Variable Capacity Management: Not Supported 00:19:52.533 Delete Endurance Group: Not Supported 00:19:52.533 Delete NVM Set: Not Supported 00:19:52.533 Extended LBA Formats Supported: Not Supported 00:19:52.533 Flexible Data Placement Supported: Not Supported 00:19:52.533 00:19:52.533 Controller Memory Buffer Support 00:19:52.533 ================================ 00:19:52.533 Supported: No 00:19:52.533 00:19:52.533 Persistent Memory Region Support 00:19:52.533 ================================ 00:19:52.533 Supported: No 00:19:52.533 00:19:52.533 Admin Command Set Attributes 00:19:52.533 ============================ 00:19:52.533 Security Send/Receive: Not Supported 00:19:52.533 Format NVM: Not Supported 00:19:52.533 Firmware Activate/Download: Not Supported 00:19:52.533 Namespace Management: Not Supported 00:19:52.533 Device Self-Test: Not Supported 00:19:52.533 Directives: Not Supported 00:19:52.533 NVMe-MI: Not Supported 00:19:52.533 Virtualization Management: Not Supported 00:19:52.533 Doorbell Buffer Config: Not Supported 00:19:52.533 Get LBA Status Capability: Not Supported 00:19:52.533 Command & Feature Lockdown Capability: Not Supported 00:19:52.533 Abort Command Limit: 4 00:19:52.533 Async Event Request Limit: 4 00:19:52.533 Number of Firmware Slots: N/A 00:19:52.533 Firmware Slot 1 Read-Only: N/A 00:19:52.533 Firmware Activation Without Reset: N/A 00:19:52.533 Multiple Update Detection Support: N/A 00:19:52.533 Firmware Update Granularity: No Information Provided 00:19:52.533 Per-Namespace SMART Log: No 00:19:52.533 Asymmetric Namespace Access Log Page: Not Supported 00:19:52.533 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:52.533 Command Effects Log Page: Supported 00:19:52.533 Get Log Page Extended Data: Supported 00:19:52.533 Telemetry Log Pages: Not Supported 00:19:52.533 Persistent Event Log Pages: Not Supported 00:19:52.533 Supported Log Pages Log Page: May Support 00:19:52.533 Commands Supported & Effects Log Page: Not Supported 00:19:52.533 
Feature Identifiers & Effects Log Page:May Support 00:19:52.533 NVMe-MI Commands & Effects Log Page: May Support 00:19:52.533 Data Area 4 for Telemetry Log: Not Supported 00:19:52.533 Error Log Page Entries Supported: 128 00:19:52.533 Keep Alive: Supported 00:19:52.533 Keep Alive Granularity: 10000 ms 00:19:52.533 00:19:52.533 NVM Command Set Attributes 00:19:52.533 ========================== 00:19:52.533 Submission Queue Entry Size 00:19:52.533 Max: 64 00:19:52.533 Min: 64 00:19:52.533 Completion Queue Entry Size 00:19:52.533 Max: 16 00:19:52.533 Min: 16 00:19:52.533 Number of Namespaces: 32 00:19:52.533 Compare Command: Supported 00:19:52.533 Write Uncorrectable Command: Not Supported 00:19:52.533 Dataset Management Command: Supported 00:19:52.533 Write Zeroes Command: Supported 00:19:52.533 Set Features Save Field: Not Supported 00:19:52.533 Reservations: Supported 00:19:52.533 Timestamp: Not Supported 00:19:52.533 Copy: Supported 00:19:52.533 Volatile Write Cache: Present 00:19:52.533 Atomic Write Unit (Normal): 1 00:19:52.533 Atomic Write Unit (PFail): 1 00:19:52.533 Atomic Compare & Write Unit: 1 00:19:52.533 Fused Compare & Write: Supported 00:19:52.533 Scatter-Gather List 00:19:52.533 SGL Command Set: Supported 00:19:52.533 SGL Keyed: Supported 00:19:52.533 SGL Bit Bucket Descriptor: Not Supported 00:19:52.533 SGL Metadata Pointer: Not Supported 00:19:52.533 Oversized SGL: Not Supported 00:19:52.533 SGL Metadata Address: Not Supported 00:19:52.533 SGL Offset: Supported 00:19:52.533 Transport SGL Data Block: Not Supported 00:19:52.533 Replay Protected Memory Block: Not Supported 00:19:52.533 00:19:52.533 Firmware Slot Information 00:19:52.533 ========================= 00:19:52.533 Active slot: 1 00:19:52.533 Slot 1 Firmware Revision: 24.05 00:19:52.533 00:19:52.533 00:19:52.533 Commands Supported and Effects 00:19:52.533 ============================== 00:19:52.533 Admin Commands 00:19:52.533 -------------- 00:19:52.533 Get Log Page (02h): Supported 00:19:52.533 Identify (06h): Supported 00:19:52.533 Abort (08h): Supported 00:19:52.533 Set Features (09h): Supported 00:19:52.533 Get Features (0Ah): Supported 00:19:52.533 Asynchronous Event Request (0Ch): Supported 00:19:52.533 Keep Alive (18h): Supported 00:19:52.533 I/O Commands 00:19:52.533 ------------ 00:19:52.533 Flush (00h): Supported LBA-Change 00:19:52.533 Write (01h): Supported LBA-Change 00:19:52.533 Read (02h): Supported 00:19:52.533 Compare (05h): Supported 00:19:52.533 Write Zeroes (08h): Supported LBA-Change 00:19:52.533 Dataset Management (09h): Supported LBA-Change 00:19:52.533 Copy (19h): Supported LBA-Change 00:19:52.533 Unknown (79h): Supported LBA-Change 00:19:52.533 Unknown (7Ah): Supported 00:19:52.533 00:19:52.533 Error Log 00:19:52.533 ========= 00:19:52.533 00:19:52.533 Arbitration 00:19:52.533 =========== 00:19:52.533 Arbitration Burst: 1 00:19:52.533 00:19:52.533 Power Management 00:19:52.533 ================ 00:19:52.533 Number of Power States: 1 00:19:52.533 Current Power State: Power State #0 00:19:52.533 Power State #0: 00:19:52.533 Max Power: 0.00 W 00:19:52.533 Non-Operational State: Operational 00:19:52.533 Entry Latency: Not Reported 00:19:52.533 Exit Latency: Not Reported 00:19:52.533 Relative Read Throughput: 0 00:19:52.533 Relative Read Latency: 0 00:19:52.533 Relative Write Throughput: 0 00:19:52.533 Relative Write Latency: 0 00:19:52.533 Idle Power: Not Reported 00:19:52.533 Active Power: Not Reported 00:19:52.533 Non-Operational Permissive Mode: Not Supported 00:19:52.533 00:19:52.534 Health 
Information 00:19:52.534 ================== 00:19:52.534 Critical Warnings: 00:19:52.534 Available Spare Space: OK 00:19:52.534 Temperature: OK 00:19:52.534 Device Reliability: OK 00:19:52.534 Read Only: No 00:19:52.534 Volatile Memory Backup: OK 00:19:52.534 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:52.534 Temperature Threshold: [2024-02-14 19:20:29.726689] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.534 [2024-02-14 19:20:29.726697] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.534 [2024-02-14 19:20:29.726700] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.726704] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5df0) on tqpair=0x676410 00:19:52.534 [2024-02-14 19:20:29.726714] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.534 [2024-02-14 19:20:29.726721] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.534 [2024-02-14 19:20:29.726724] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.726728] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b60b0) on tqpair=0x676410 00:19:52.534 [2024-02-14 19:20:29.726735] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.534 [2024-02-14 19:20:29.726741] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.534 [2024-02-14 19:20:29.726745] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.726748] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b6210) on tqpair=0x676410 00:19:52.534 [2024-02-14 19:20:29.726867] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.726876] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.726879] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x676410) 00:19:52.534 [2024-02-14 19:20:29.726887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.534 [2024-02-14 19:20:29.726918] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b6210, cid 7, qid 0 00:19:52.534 [2024-02-14 19:20:29.727553] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.534 [2024-02-14 19:20:29.727572] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.534 [2024-02-14 19:20:29.727577] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.727581] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b6210) on tqpair=0x676410 00:19:52.534 [2024-02-14 19:20:29.727625] nvme_ctrlr.c:4208:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:52.534 [2024-02-14 19:20:29.727640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.534 [2024-02-14 19:20:29.727648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.534 [2024-02-14 19:20:29.727654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.534 [2024-02-14 19:20:29.727659] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.534 [2024-02-14 19:20:29.727668] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.727672] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.727676] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x676410) 00:19:52.534 [2024-02-14 19:20:29.727683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.534 [2024-02-14 19:20:29.727711] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5c90, cid 3, qid 0 00:19:52.534 [2024-02-14 19:20:29.728099] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.534 [2024-02-14 19:20:29.728114] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.534 [2024-02-14 19:20:29.728119] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.728123] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5c90) on tqpair=0x676410 00:19:52.534 [2024-02-14 19:20:29.728131] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.728135] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.728139] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x676410) 00:19:52.534 [2024-02-14 19:20:29.728146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.534 [2024-02-14 19:20:29.728175] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5c90, cid 3, qid 0 00:19:52.534 [2024-02-14 19:20:29.728383] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.534 [2024-02-14 19:20:29.728390] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.534 [2024-02-14 19:20:29.728393] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.728397] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5c90) on tqpair=0x676410 00:19:52.534 [2024-02-14 19:20:29.728401] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:52.534 [2024-02-14 19:20:29.728406] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:52.534 [2024-02-14 19:20:29.728416] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.728421] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.728425] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x676410) 00:19:52.534 [2024-02-14 19:20:29.728432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.534 [2024-02-14 19:20:29.728454] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5c90, cid 3, qid 0 00:19:52.534 [2024-02-14 19:20:29.729020] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.534 [2024-02-14 19:20:29.729037] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.534 
[2024-02-14 19:20:29.729042] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.729046] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5c90) on tqpair=0x676410 00:19:52.534 [2024-02-14 19:20:29.729057] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.729063] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.729067] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x676410) 00:19:52.534 [2024-02-14 19:20:29.729074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.534 [2024-02-14 19:20:29.729099] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5c90, cid 3, qid 0 00:19:52.534 [2024-02-14 19:20:29.729178] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.534 [2024-02-14 19:20:29.729185] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.534 [2024-02-14 19:20:29.729188] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.729192] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5c90) on tqpair=0x676410 00:19:52.534 [2024-02-14 19:20:29.729202] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.729207] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.534 [2024-02-14 19:20:29.729211] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x676410) 00:19:52.534 [2024-02-14 19:20:29.729218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.534 [2024-02-14 19:20:29.729238] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5c90, cid 3, qid 0 00:19:52.534 [2024-02-14 19:20:29.729706] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.535 [2024-02-14 19:20:29.729722] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.535 [2024-02-14 19:20:29.729727] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.729731] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5c90) on tqpair=0x676410 00:19:52.535 [2024-02-14 19:20:29.729742] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.729747] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.729751] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x676410) 00:19:52.535 [2024-02-14 19:20:29.729758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.535 [2024-02-14 19:20:29.729782] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5c90, cid 3, qid 0 00:19:52.535 [2024-02-14 19:20:29.729859] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.535 [2024-02-14 19:20:29.729866] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.535 [2024-02-14 19:20:29.729869] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.729873] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x6b5c90) on tqpair=0x676410 00:19:52.535 [2024-02-14 19:20:29.729884] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.729888] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.729892] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x676410) 00:19:52.535 [2024-02-14 19:20:29.729899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.535 [2024-02-14 19:20:29.729920] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5c90, cid 3, qid 0 00:19:52.535 [2024-02-14 19:20:29.730356] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.535 [2024-02-14 19:20:29.730370] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.535 [2024-02-14 19:20:29.730375] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.730379] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5c90) on tqpair=0x676410 00:19:52.535 [2024-02-14 19:20:29.730390] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.730395] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.730399] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x676410) 00:19:52.535 [2024-02-14 19:20:29.730406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.535 [2024-02-14 19:20:29.730429] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5c90, cid 3, qid 0 00:19:52.535 [2024-02-14 19:20:29.734515] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.535 [2024-02-14 19:20:29.734535] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.535 [2024-02-14 19:20:29.734540] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.734544] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5c90) on tqpair=0x676410 00:19:52.535 [2024-02-14 19:20:29.734558] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.734563] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.734567] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x676410) 00:19:52.535 [2024-02-14 19:20:29.734575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.535 [2024-02-14 19:20:29.734603] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b5c90, cid 3, qid 0 00:19:52.535 [2024-02-14 19:20:29.734689] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:52.535 [2024-02-14 19:20:29.734696] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:52.535 [2024-02-14 19:20:29.734700] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:52.535 [2024-02-14 19:20:29.734703] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6b5c90) on tqpair=0x676410 00:19:52.535 [2024-02-14 19:20:29.734712] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 
00:19:52.535 0 Kelvin (-273 Celsius) 00:19:52.535 Available Spare: 0% 00:19:52.535 Available Spare Threshold: 0% 00:19:52.535 Life Percentage Used: 0% 00:19:52.535 Data Units Read: 0 00:19:52.535 Data Units Written: 0 00:19:52.535 Host Read Commands: 0 00:19:52.535 Host Write Commands: 0 00:19:52.535 Controller Busy Time: 0 minutes 00:19:52.535 Power Cycles: 0 00:19:52.535 Power On Hours: 0 hours 00:19:52.535 Unsafe Shutdowns: 0 00:19:52.535 Unrecoverable Media Errors: 0 00:19:52.535 Lifetime Error Log Entries: 0 00:19:52.535 Warning Temperature Time: 0 minutes 00:19:52.535 Critical Temperature Time: 0 minutes 00:19:52.535 00:19:52.535 Number of Queues 00:19:52.535 ================ 00:19:52.535 Number of I/O Submission Queues: 127 00:19:52.535 Number of I/O Completion Queues: 127 00:19:52.535 00:19:52.535 Active Namespaces 00:19:52.535 ================= 00:19:52.535 Namespace ID:1 00:19:52.535 Error Recovery Timeout: Unlimited 00:19:52.535 Command Set Identifier: NVM (00h) 00:19:52.535 Deallocate: Supported 00:19:52.535 Deallocated/Unwritten Error: Not Supported 00:19:52.535 Deallocated Read Value: Unknown 00:19:52.535 Deallocate in Write Zeroes: Not Supported 00:19:52.535 Deallocated Guard Field: 0xFFFF 00:19:52.535 Flush: Supported 00:19:52.535 Reservation: Supported 00:19:52.535 Namespace Sharing Capabilities: Multiple Controllers 00:19:52.535 Size (in LBAs): 131072 (0GiB) 00:19:52.535 Capacity (in LBAs): 131072 (0GiB) 00:19:52.535 Utilization (in LBAs): 131072 (0GiB) 00:19:52.535 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:52.535 EUI64: ABCDEF0123456789 00:19:52.535 UUID: c856abcd-be3c-4c56-9901-572a614c8b23 00:19:52.535 Thin Provisioning: Not Supported 00:19:52.535 Per-NS Atomic Units: Yes 00:19:52.535 Atomic Boundary Size (Normal): 0 00:19:52.535 Atomic Boundary Size (PFail): 0 00:19:52.535 Atomic Boundary Offset: 0 00:19:52.535 Maximum Single Source Range Length: 65535 00:19:52.535 Maximum Copy Length: 65535 00:19:52.535 Maximum Source Range Count: 1 00:19:52.535 NGUID/EUI64 Never Reused: No 00:19:52.535 Namespace Write Protected: No 00:19:52.535 Number of LBA Formats: 1 00:19:52.535 Current LBA Format: LBA Format #00 00:19:52.535 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:52.535 00:19:52.535 19:20:29 -- host/identify.sh@51 -- # sync 00:19:52.535 19:20:29 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.535 19:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.535 19:20:29 -- common/autotest_common.sh@10 -- # set +x 00:19:52.535 19:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.535 19:20:29 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:52.535 19:20:29 -- host/identify.sh@56 -- # nvmftestfini 00:19:52.535 19:20:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:52.535 19:20:29 -- nvmf/common.sh@116 -- # sync 00:19:52.535 19:20:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:52.535 19:20:29 -- nvmf/common.sh@119 -- # set +e 00:19:52.535 19:20:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:52.535 19:20:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:52.535 rmmod nvme_tcp 00:19:52.535 rmmod nvme_fabrics 00:19:52.535 rmmod nvme_keyring 00:19:52.535 19:20:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:52.535 19:20:29 -- nvmf/common.sh@123 -- # set -e 00:19:52.535 19:20:29 -- nvmf/common.sh@124 -- # return 0 00:19:52.535 19:20:29 -- nvmf/common.sh@477 -- # '[' -n 81383 ']' 00:19:52.535 19:20:29 -- nvmf/common.sh@478 -- # killprocess 81383 
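The identify report above comes from SPDK's own host stack (test/nvmf/host/identify.sh) talking to the target at 10.0.0.2:4420. As a cross-check, the same controller and namespace fields can be read back with stock nvme-cli from the initiator side; the commands below are a minimal sketch, not part of this run, and assume the kernel nvme-tcp module plus the subsystem from this log (nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420). The /dev/nvme0 name is illustrative and depends on what the initiator already has attached.

  # sketch only: reading the same identify data with nvme-cli (not executed in this log)
  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.2 -s 4420        # discovery log should list nqn.2016-06.io.spdk:cnode1
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                         # MDTS 131072, CNTLID 0x0001, fused compare-and-write, keep-alive granularity 10000 ms
  nvme id-ns   /dev/nvme0n1                       # 131072 LBAs x 512 B (64 MiB), matching the namespace report above
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1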
00:19:52.535 19:20:29 -- common/autotest_common.sh@924 -- # '[' -z 81383 ']' 00:19:52.535 19:20:29 -- common/autotest_common.sh@928 -- # kill -0 81383 00:19:52.535 19:20:29 -- common/autotest_common.sh@929 -- # uname 00:19:52.535 19:20:29 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:52.535 19:20:29 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 81383 00:19:52.535 killing process with pid 81383 00:19:52.535 19:20:29 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:52.535 19:20:29 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:52.535 19:20:29 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 81383' 00:19:52.535 19:20:29 -- common/autotest_common.sh@943 -- # kill 81383 00:19:52.535 [2024-02-14 19:20:29.877601] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:52.535 19:20:29 -- common/autotest_common.sh@948 -- # wait 81383 00:19:53.104 19:20:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:53.104 19:20:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:53.104 19:20:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:53.104 19:20:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.104 19:20:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:53.104 19:20:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.104 19:20:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.104 19:20:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.104 19:20:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:53.104 ************************************ 00:19:53.104 END TEST nvmf_identify 00:19:53.104 ************************************ 00:19:53.104 00:19:53.104 real 0m2.635s 00:19:53.104 user 0m7.186s 00:19:53.104 sys 0m0.668s 00:19:53.104 19:20:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:53.104 19:20:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.104 19:20:30 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:53.104 19:20:30 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:19:53.104 19:20:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:53.104 19:20:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.104 ************************************ 00:19:53.104 START TEST nvmf_perf 00:19:53.104 ************************************ 00:19:53.104 19:20:30 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:53.104 * Looking for test storage... 
00:19:53.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:53.104 19:20:30 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:53.104 19:20:30 -- nvmf/common.sh@7 -- # uname -s 00:19:53.104 19:20:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.104 19:20:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.104 19:20:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.104 19:20:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.104 19:20:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.104 19:20:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.104 19:20:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.104 19:20:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.104 19:20:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.104 19:20:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.104 19:20:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:53.104 19:20:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:19:53.104 19:20:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.104 19:20:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.104 19:20:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:53.104 19:20:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:53.104 19:20:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.104 19:20:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.104 19:20:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.104 19:20:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.104 19:20:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.104 19:20:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.104 19:20:30 -- paths/export.sh@5 -- 
# export PATH 00:19:53.104 19:20:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.104 19:20:30 -- nvmf/common.sh@46 -- # : 0 00:19:53.104 19:20:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:53.104 19:20:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:53.104 19:20:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:53.104 19:20:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.104 19:20:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.104 19:20:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:53.104 19:20:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:53.104 19:20:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:53.104 19:20:30 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:53.104 19:20:30 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:53.104 19:20:30 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:53.104 19:20:30 -- host/perf.sh@17 -- # nvmftestinit 00:19:53.104 19:20:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:53.104 19:20:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.104 19:20:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:53.104 19:20:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:53.104 19:20:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:53.104 19:20:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.105 19:20:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.105 19:20:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.105 19:20:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:53.105 19:20:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:53.105 19:20:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:53.105 19:20:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:53.105 19:20:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:53.105 19:20:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:53.105 19:20:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.105 19:20:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.105 19:20:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:53.105 19:20:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:53.105 19:20:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:53.105 19:20:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:53.105 19:20:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:53.105 19:20:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.105 19:20:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:53.105 19:20:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:53.105 19:20:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:53.105 19:20:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:53.105 19:20:30 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:53.105 19:20:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:53.105 Cannot find device "nvmf_tgt_br" 00:19:53.105 19:20:30 -- nvmf/common.sh@154 -- # true 00:19:53.105 19:20:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:53.105 Cannot find device "nvmf_tgt_br2" 00:19:53.105 19:20:30 -- nvmf/common.sh@155 -- # true 00:19:53.105 19:20:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:53.105 19:20:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:53.105 Cannot find device "nvmf_tgt_br" 00:19:53.105 19:20:30 -- nvmf/common.sh@157 -- # true 00:19:53.105 19:20:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:53.105 Cannot find device "nvmf_tgt_br2" 00:19:53.105 19:20:30 -- nvmf/common.sh@158 -- # true 00:19:53.105 19:20:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:53.105 19:20:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:53.364 19:20:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:53.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.364 19:20:30 -- nvmf/common.sh@161 -- # true 00:19:53.364 19:20:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:53.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.364 19:20:30 -- nvmf/common.sh@162 -- # true 00:19:53.364 19:20:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:53.365 19:20:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:53.365 19:20:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:53.365 19:20:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:53.365 19:20:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:53.365 19:20:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:53.365 19:20:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:53.365 19:20:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:53.365 19:20:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:53.365 19:20:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:53.365 19:20:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:53.365 19:20:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:53.365 19:20:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:53.365 19:20:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:53.365 19:20:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:53.365 19:20:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:53.365 19:20:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:53.365 19:20:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:53.365 19:20:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:53.365 19:20:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:53.365 19:20:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:53.365 19:20:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:53.365 19:20:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:53.365 19:20:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:53.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:53.365 00:19:53.365 --- 10.0.0.2 ping statistics --- 00:19:53.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.365 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:53.365 19:20:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:53.365 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:53.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:19:53.365 00:19:53.365 --- 10.0.0.3 ping statistics --- 00:19:53.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.365 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:53.365 19:20:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:53.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:53.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:53.365 00:19:53.365 --- 10.0.0.1 ping statistics --- 00:19:53.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.365 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:53.365 19:20:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.365 19:20:30 -- nvmf/common.sh@421 -- # return 0 00:19:53.365 19:20:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:53.365 19:20:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.365 19:20:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:53.365 19:20:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:53.365 19:20:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.365 19:20:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:53.365 19:20:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:53.365 19:20:30 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:53.365 19:20:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:53.365 19:20:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:53.365 19:20:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.365 19:20:30 -- nvmf/common.sh@469 -- # nvmfpid=81612 00:19:53.365 19:20:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:53.365 19:20:30 -- nvmf/common.sh@470 -- # waitforlisten 81612 00:19:53.365 19:20:30 -- common/autotest_common.sh@817 -- # '[' -z 81612 ']' 00:19:53.365 19:20:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.365 19:20:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:53.365 19:20:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.365 19:20:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:53.365 19:20:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.624 [2024-02-14 19:20:30.825572] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:19:53.624 [2024-02-14 19:20:30.825659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.624 [2024-02-14 19:20:30.967472] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.883 [2024-02-14 19:20:31.055804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:53.883 [2024-02-14 19:20:31.055968] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.883 [2024-02-14 19:20:31.055982] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.883 [2024-02-14 19:20:31.055991] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.883 [2024-02-14 19:20:31.056519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.883 [2024-02-14 19:20:31.056624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.883 [2024-02-14 19:20:31.056714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.883 [2024-02-14 19:20:31.056728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.450 19:20:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:54.450 19:20:31 -- common/autotest_common.sh@850 -- # return 0 00:19:54.450 19:20:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:54.450 19:20:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:54.450 19:20:31 -- common/autotest_common.sh@10 -- # set +x 00:19:54.450 19:20:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.450 19:20:31 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:54.450 19:20:31 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:55.018 19:20:32 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:55.018 19:20:32 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:55.018 19:20:32 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:19:55.018 19:20:32 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:55.277 19:20:32 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:55.277 19:20:32 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:19:55.277 19:20:32 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:55.277 19:20:32 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:55.277 19:20:32 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:55.536 [2024-02-14 19:20:32.829553] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.536 19:20:32 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.795 19:20:33 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:55.795 19:20:33 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:56.052 19:20:33 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:56.052 19:20:33 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:56.052 
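For reference, the target side of this perf run is assembled with a short JSON-RPC sequence: a TCP transport, one subsystem, the Malloc and NVMe bdevs as namespaces, and (in the step traced next) a TCP listener on 10.0.0.2:4420. A condensed sketch of that sequence, using the same rpc.py path, bdev names, and addresses seen in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                  # -> Malloc0 (64 MiB, 512 B blocks)
$rpc nvmf_create_transport -t tcp -o                            # options taken verbatim from NVMF_TRANSPORT_OPTS in this run
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # becomes NSID 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # becomes NSID 2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

rpc.py reaches the target over its path-based UNIX-domain socket, which is why it can be run from the root namespace even though nvmf_tgt itself is running inside nvmf_tgt_ns_spdk.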
19:20:33 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:56.311 [2024-02-14 19:20:33.639179] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.311 19:20:33 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:56.569 19:20:33 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:19:56.569 19:20:33 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:19:56.569 19:20:33 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:56.569 19:20:33 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:19:57.947 Initializing NVMe Controllers 00:19:57.947 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:19:57.947 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:19:57.947 Initialization complete. Launching workers. 00:19:57.947 ======================================================== 00:19:57.947 Latency(us) 00:19:57.947 Device Information : IOPS MiB/s Average min max 00:19:57.947 PCIE (0000:00:06.0) NSID 1 from core 0: 23676.51 92.49 1351.93 350.24 8446.15 00:19:57.947 ======================================================== 00:19:57.947 Total : 23676.51 92.49 1351.93 350.24 8446.15 00:19:57.947 00:19:57.947 19:20:34 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:59.324 Initializing NVMe Controllers 00:19:59.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:59.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:59.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:59.324 Initialization complete. Launching workers. 
00:19:59.324 ======================================================== 00:19:59.324 Latency(us) 00:19:59.324 Device Information : IOPS MiB/s Average min max 00:19:59.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3297.83 12.88 302.94 115.33 7140.25 00:19:59.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 121.99 0.48 8261.95 4970.92 12036.61 00:19:59.324 ======================================================== 00:19:59.324 Total : 3419.82 13.36 586.86 115.33 12036.61 00:19:59.324 00:19:59.324 19:20:36 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.261 [2024-02-14 19:20:37.571557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb950 is same with the state(5) to be set 00:20:00.261 [2024-02-14 19:20:37.571618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb950 is same with the state(5) to be set 00:20:00.261 [2024-02-14 19:20:37.571647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb950 is same with the state(5) to be set 00:20:00.519 Initializing NVMe Controllers 00:20:00.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:00.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:00.519 Initialization complete. Launching workers. 00:20:00.519 ======================================================== 00:20:00.519 Latency(us) 00:20:00.519 Device Information : IOPS MiB/s Average min max 00:20:00.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10369.07 40.50 3086.09 403.32 6584.97 00:20:00.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2669.50 10.43 12059.50 7355.67 20192.83 00:20:00.520 ======================================================== 00:20:00.520 Total : 13038.57 50.93 4923.29 403.32 20192.83 00:20:00.520 00:20:00.520 19:20:37 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:00.520 19:20:37 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:03.052 [2024-02-14 19:20:40.144575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878610 is same with the state(5) to be set 00:20:03.052 Initializing NVMe Controllers 00:20:03.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:03.052 Controller IO queue size 128, less than required. 00:20:03.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:03.052 Controller IO queue size 128, less than required. 00:20:03.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:03.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:03.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:03.052 Initialization complete. Launching workers. 
00:20:03.052 ======================================================== 00:20:03.052 Latency(us) 00:20:03.052 Device Information : IOPS MiB/s Average min max 00:20:03.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1461.07 365.27 89215.42 58629.74 163035.74 00:20:03.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 541.34 135.33 247902.96 123303.23 379453.39 00:20:03.052 ======================================================== 00:20:03.052 Total : 2002.40 500.60 132115.72 58629.74 379453.39 00:20:03.052 00:20:03.052 19:20:40 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:03.052 No valid NVMe controllers or AIO or URING devices found 00:20:03.311 Initializing NVMe Controllers 00:20:03.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:03.311 Controller IO queue size 128, less than required. 00:20:03.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:03.311 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:03.311 Controller IO queue size 128, less than required. 00:20:03.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:03.311 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:03.311 WARNING: Some requested NVMe devices were skipped 00:20:03.311 19:20:40 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:05.846 Initializing NVMe Controllers 00:20:05.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.846 Controller IO queue size 128, less than required. 00:20:05.846 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:05.846 Controller IO queue size 128, less than required. 00:20:05.846 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:05.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:05.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:05.846 Initialization complete. Launching workers. 
00:20:05.846 00:20:05.846 ==================== 00:20:05.846 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:05.846 TCP transport: 00:20:05.846 polls: 5513 00:20:05.846 idle_polls: 2561 00:20:05.846 sock_completions: 2952 00:20:05.846 nvme_completions: 3817 00:20:05.846 submitted_requests: 5706 00:20:05.846 queued_requests: 1 00:20:05.846 00:20:05.846 ==================== 00:20:05.846 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:05.846 TCP transport: 00:20:05.846 polls: 10982 00:20:05.846 idle_polls: 8037 00:20:05.846 sock_completions: 2945 00:20:05.846 nvme_completions: 5851 00:20:05.846 submitted_requests: 8768 00:20:05.846 queued_requests: 1 00:20:05.846 ======================================================== 00:20:05.846 Latency(us) 00:20:05.846 Device Information : IOPS MiB/s Average min max 00:20:05.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 951.86 237.96 138614.89 85547.57 223070.48 00:20:05.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1459.22 364.80 88075.87 53491.22 134519.16 00:20:05.846 ======================================================== 00:20:05.846 Total : 2411.07 602.77 108027.96 53491.22 223070.48 00:20:05.846 00:20:05.846 19:20:42 -- host/perf.sh@66 -- # sync 00:20:05.846 19:20:43 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.846 19:20:43 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:05.846 19:20:43 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:05.846 19:20:43 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:06.105 19:20:43 -- host/perf.sh@72 -- # ls_guid=16edcc59-7da6-441a-a91f-8f154c765fe2 00:20:06.105 19:20:43 -- host/perf.sh@73 -- # get_lvs_free_mb 16edcc59-7da6-441a-a91f-8f154c765fe2 00:20:06.105 19:20:43 -- common/autotest_common.sh@1341 -- # local lvs_uuid=16edcc59-7da6-441a-a91f-8f154c765fe2 00:20:06.105 19:20:43 -- common/autotest_common.sh@1342 -- # local lvs_info 00:20:06.105 19:20:43 -- common/autotest_common.sh@1343 -- # local fc 00:20:06.105 19:20:43 -- common/autotest_common.sh@1344 -- # local cs 00:20:06.105 19:20:43 -- common/autotest_common.sh@1345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:06.364 19:20:43 -- common/autotest_common.sh@1345 -- # lvs_info='[ 00:20:06.364 { 00:20:06.364 "base_bdev": "Nvme0n1", 00:20:06.364 "block_size": 4096, 00:20:06.364 "cluster_size": 4194304, 00:20:06.364 "free_clusters": 1278, 00:20:06.364 "name": "lvs_0", 00:20:06.364 "total_data_clusters": 1278, 00:20:06.364 "uuid": "16edcc59-7da6-441a-a91f-8f154c765fe2" 00:20:06.364 } 00:20:06.364 ]' 00:20:06.364 19:20:43 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="16edcc59-7da6-441a-a91f-8f154c765fe2") .free_clusters' 00:20:06.364 19:20:43 -- common/autotest_common.sh@1346 -- # fc=1278 00:20:06.364 19:20:43 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="16edcc59-7da6-441a-a91f-8f154c765fe2") .cluster_size' 00:20:06.624 5112 00:20:06.624 19:20:43 -- common/autotest_common.sh@1347 -- # cs=4194304 00:20:06.624 19:20:43 -- common/autotest_common.sh@1350 -- # free_mb=5112 00:20:06.624 19:20:43 -- common/autotest_common.sh@1351 -- # echo 5112 00:20:06.624 19:20:43 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:06.624 19:20:43 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 16edcc59-7da6-441a-a91f-8f154c765fe2 lbd_0 5112 00:20:06.883 19:20:44 -- host/perf.sh@80 -- # lb_guid=b5c15a50-d026-41b6-9413-75caae4378af 00:20:06.883 19:20:44 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore b5c15a50-d026-41b6-9413-75caae4378af lvs_n_0 00:20:07.143 19:20:44 -- host/perf.sh@83 -- # ls_nested_guid=b16d2e57-e9d2-4a3d-8a0d-a706c2ee7e6f 00:20:07.143 19:20:44 -- host/perf.sh@84 -- # get_lvs_free_mb b16d2e57-e9d2-4a3d-8a0d-a706c2ee7e6f 00:20:07.143 19:20:44 -- common/autotest_common.sh@1341 -- # local lvs_uuid=b16d2e57-e9d2-4a3d-8a0d-a706c2ee7e6f 00:20:07.143 19:20:44 -- common/autotest_common.sh@1342 -- # local lvs_info 00:20:07.143 19:20:44 -- common/autotest_common.sh@1343 -- # local fc 00:20:07.143 19:20:44 -- common/autotest_common.sh@1344 -- # local cs 00:20:07.143 19:20:44 -- common/autotest_common.sh@1345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:07.403 19:20:44 -- common/autotest_common.sh@1345 -- # lvs_info='[ 00:20:07.403 { 00:20:07.403 "base_bdev": "Nvme0n1", 00:20:07.403 "block_size": 4096, 00:20:07.403 "cluster_size": 4194304, 00:20:07.403 "free_clusters": 0, 00:20:07.403 "name": "lvs_0", 00:20:07.403 "total_data_clusters": 1278, 00:20:07.403 "uuid": "16edcc59-7da6-441a-a91f-8f154c765fe2" 00:20:07.403 }, 00:20:07.403 { 00:20:07.403 "base_bdev": "b5c15a50-d026-41b6-9413-75caae4378af", 00:20:07.403 "block_size": 4096, 00:20:07.403 "cluster_size": 4194304, 00:20:07.403 "free_clusters": 1276, 00:20:07.403 "name": "lvs_n_0", 00:20:07.403 "total_data_clusters": 1276, 00:20:07.403 "uuid": "b16d2e57-e9d2-4a3d-8a0d-a706c2ee7e6f" 00:20:07.403 } 00:20:07.403 ]' 00:20:07.403 19:20:44 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="b16d2e57-e9d2-4a3d-8a0d-a706c2ee7e6f") .free_clusters' 00:20:07.403 19:20:44 -- common/autotest_common.sh@1346 -- # fc=1276 00:20:07.403 19:20:44 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="b16d2e57-e9d2-4a3d-8a0d-a706c2ee7e6f") .cluster_size' 00:20:07.403 5104 00:20:07.403 19:20:44 -- common/autotest_common.sh@1347 -- # cs=4194304 00:20:07.403 19:20:44 -- common/autotest_common.sh@1350 -- # free_mb=5104 00:20:07.403 19:20:44 -- common/autotest_common.sh@1351 -- # echo 5104 00:20:07.403 19:20:44 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:07.403 19:20:44 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b16d2e57-e9d2-4a3d-8a0d-a706c2ee7e6f lbd_nest_0 5104 00:20:07.663 19:20:44 -- host/perf.sh@88 -- # lb_nested_guid=04d67635-37c4-4f0a-8886-fa859335f58f 00:20:07.663 19:20:44 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.921 19:20:45 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:07.921 19:20:45 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 04d67635-37c4-4f0a-8886-fa859335f58f 00:20:08.186 19:20:45 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.186 19:20:45 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:08.186 19:20:45 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:08.186 19:20:45 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:08.186 19:20:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:08.186 19:20:45 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:08.447 No valid NVMe controllers or AIO or URING devices found 00:20:08.706 Initializing NVMe Controllers 00:20:08.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:08.706 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:08.706 WARNING: Some requested NVMe devices were skipped 00:20:08.706 19:20:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:08.706 19:20:45 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:20.924 Initializing NVMe Controllers 00:20:20.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:20.924 Initialization complete. Launching workers. 00:20:20.924 ======================================================== 00:20:20.924 Latency(us) 00:20:20.924 Device Information : IOPS MiB/s Average min max 00:20:20.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 839.54 104.94 1190.38 397.24 7751.95 00:20:20.924 ======================================================== 00:20:20.924 Total : 839.54 104.94 1190.38 397.24 7751.95 00:20:20.924 00:20:20.924 19:20:56 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:20.924 19:20:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:20.924 19:20:56 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:20.924 No valid NVMe controllers or AIO or URING devices found 00:20:20.924 Initializing NVMe Controllers 00:20:20.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.924 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:20.924 WARNING: Some requested NVMe devices were skipped 00:20:20.924 19:20:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:20.924 19:20:56 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.925 Initializing NVMe Controllers 00:20:30.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:30.925 Initialization complete. Launching workers. 
00:20:30.925 ======================================================== 00:20:30.925 Latency(us) 00:20:30.925 Device Information : IOPS MiB/s Average min max 00:20:30.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1133.15 141.64 28281.04 8006.49 79757.39 00:20:30.925 ======================================================== 00:20:30.925 Total : 1133.15 141.64 28281.04 8006.49 79757.39 00:20:30.925 00:20:30.925 19:21:06 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:30.925 19:21:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:30.925 19:21:06 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.925 No valid NVMe controllers or AIO or URING devices found 00:20:30.925 Initializing NVMe Controllers 00:20:30.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.925 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:30.925 WARNING: Some requested NVMe devices were skipped 00:20:30.925 19:21:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:30.925 19:21:06 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:40.946 Initializing NVMe Controllers 00:20:40.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.946 Controller IO queue size 128, less than required. 00:20:40.946 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.946 Initialization complete. Launching workers. 
00:20:40.946 ======================================================== 00:20:40.946 Latency(us) 00:20:40.946 Device Information : IOPS MiB/s Average min max 00:20:40.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4125.33 515.67 31068.47 11479.60 63758.32 00:20:40.947 ======================================================== 00:20:40.947 Total : 4125.33 515.67 31068.47 11479.60 63758.32 00:20:40.947 00:20:40.947 19:21:17 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.947 19:21:17 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 04d67635-37c4-4f0a-8886-fa859335f58f 00:20:40.947 19:21:17 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:40.947 19:21:18 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b5c15a50-d026-41b6-9413-75caae4378af 00:20:41.205 19:21:18 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:41.205 19:21:18 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:41.205 19:21:18 -- host/perf.sh@114 -- # nvmftestfini 00:20:41.205 19:21:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:41.205 19:21:18 -- nvmf/common.sh@116 -- # sync 00:20:41.205 19:21:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:41.205 19:21:18 -- nvmf/common.sh@119 -- # set +e 00:20:41.205 19:21:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:41.205 19:21:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:41.205 rmmod nvme_tcp 00:20:41.205 rmmod nvme_fabrics 00:20:41.205 rmmod nvme_keyring 00:20:41.464 19:21:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:41.464 19:21:18 -- nvmf/common.sh@123 -- # set -e 00:20:41.464 19:21:18 -- nvmf/common.sh@124 -- # return 0 00:20:41.464 19:21:18 -- nvmf/common.sh@477 -- # '[' -n 81612 ']' 00:20:41.464 19:21:18 -- nvmf/common.sh@478 -- # killprocess 81612 00:20:41.464 19:21:18 -- common/autotest_common.sh@924 -- # '[' -z 81612 ']' 00:20:41.464 19:21:18 -- common/autotest_common.sh@928 -- # kill -0 81612 00:20:41.464 19:21:18 -- common/autotest_common.sh@929 -- # uname 00:20:41.464 19:21:18 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:20:41.464 19:21:18 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 81612 00:20:41.464 killing process with pid 81612 00:20:41.464 19:21:18 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:20:41.464 19:21:18 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:20:41.464 19:21:18 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 81612' 00:20:41.464 19:21:18 -- common/autotest_common.sh@943 -- # kill 81612 00:20:41.464 19:21:18 -- common/autotest_common.sh@948 -- # wait 81612 00:20:43.370 19:21:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:43.370 19:21:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:43.370 19:21:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:43.370 19:21:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.370 19:21:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:43.370 19:21:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.370 19:21:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.370 19:21:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.370 19:21:20 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:20:43.370 ************************************ 00:20:43.370 END TEST nvmf_perf 00:20:43.370 ************************************ 00:20:43.370 00:20:43.370 real 0m50.099s 00:20:43.370 user 3m8.410s 00:20:43.370 sys 0m10.472s 00:20:43.370 19:21:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:43.370 19:21:20 -- common/autotest_common.sh@10 -- # set +x 00:20:43.370 19:21:20 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:43.370 19:21:20 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:20:43.371 19:21:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:43.371 19:21:20 -- common/autotest_common.sh@10 -- # set +x 00:20:43.371 ************************************ 00:20:43.371 START TEST nvmf_fio_host 00:20:43.371 ************************************ 00:20:43.371 19:21:20 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:43.371 * Looking for test storage... 00:20:43.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:43.371 19:21:20 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.371 19:21:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.371 19:21:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.371 19:21:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.371 19:21:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.371 19:21:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.371 19:21:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.371 19:21:20 -- paths/export.sh@5 -- # export PATH 00:20:43.371 19:21:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.371 19:21:20 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:43.371 19:21:20 -- nvmf/common.sh@7 -- # uname -s 00:20:43.371 19:21:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.371 19:21:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.371 19:21:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.371 19:21:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.371 19:21:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.371 19:21:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.371 19:21:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.371 19:21:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.371 19:21:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.371 19:21:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.371 19:21:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:20:43.371 19:21:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:20:43.371 19:21:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.371 19:21:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.371 19:21:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:43.371 19:21:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.371 19:21:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.371 19:21:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.371 19:21:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.371 19:21:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.371 19:21:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.371 19:21:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.371 19:21:20 -- paths/export.sh@5 -- # export PATH 00:20:43.371 19:21:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.371 19:21:20 -- nvmf/common.sh@46 -- # : 0 00:20:43.371 19:21:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:43.371 19:21:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:43.371 19:21:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:43.371 19:21:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.371 19:21:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.371 19:21:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:43.371 19:21:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:43.371 19:21:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:43.371 19:21:20 -- host/fio.sh@12 -- # nvmftestinit 00:20:43.371 19:21:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:43.371 19:21:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.371 19:21:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:43.371 19:21:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:43.371 19:21:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:43.371 19:21:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.371 19:21:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.371 19:21:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.371 19:21:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:43.371 19:21:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:43.371 19:21:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:43.371 19:21:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:43.371 19:21:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:43.371 19:21:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:43.371 19:21:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.371 19:21:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.371 19:21:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:43.371 19:21:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:43.371 19:21:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:43.371 19:21:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:43.371 19:21:20 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:43.371 19:21:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.371 19:21:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:43.371 19:21:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:43.371 19:21:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:43.371 19:21:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:43.371 19:21:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:43.371 19:21:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:43.371 Cannot find device "nvmf_tgt_br" 00:20:43.371 19:21:20 -- nvmf/common.sh@154 -- # true 00:20:43.371 19:21:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.371 Cannot find device "nvmf_tgt_br2" 00:20:43.371 19:21:20 -- nvmf/common.sh@155 -- # true 00:20:43.371 19:21:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:43.371 19:21:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:43.371 Cannot find device "nvmf_tgt_br" 00:20:43.371 19:21:20 -- nvmf/common.sh@157 -- # true 00:20:43.371 19:21:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:43.371 Cannot find device "nvmf_tgt_br2" 00:20:43.371 19:21:20 -- nvmf/common.sh@158 -- # true 00:20:43.371 19:21:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:43.371 19:21:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:43.372 19:21:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.372 19:21:20 -- nvmf/common.sh@161 -- # true 00:20:43.372 19:21:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:43.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.372 19:21:20 -- nvmf/common.sh@162 -- # true 00:20:43.372 19:21:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:43.372 19:21:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:43.372 19:21:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:43.372 19:21:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:43.372 19:21:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:43.372 19:21:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:43.372 19:21:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:43.372 19:21:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:43.372 19:21:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:43.372 19:21:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:43.372 19:21:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:43.372 19:21:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:43.372 19:21:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:43.372 19:21:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:43.372 19:21:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:43.372 19:21:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:20:43.631 19:21:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:43.631 19:21:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:43.631 19:21:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:43.631 19:21:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:43.631 19:21:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:43.631 19:21:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:43.631 19:21:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:43.631 19:21:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:43.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:20:43.631 00:20:43.632 --- 10.0.0.2 ping statistics --- 00:20:43.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.632 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:43.632 19:21:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:43.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:43.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:20:43.632 00:20:43.632 --- 10.0.0.3 ping statistics --- 00:20:43.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.632 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:43.632 19:21:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:43.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:20:43.632 00:20:43.632 --- 10.0.0.1 ping statistics --- 00:20:43.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.632 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:43.632 19:21:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.632 19:21:20 -- nvmf/common.sh@421 -- # return 0 00:20:43.632 19:21:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:43.632 19:21:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.632 19:21:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:43.632 19:21:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:43.632 19:21:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.632 19:21:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:43.632 19:21:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:43.632 19:21:20 -- host/fio.sh@14 -- # [[ y != y ]] 00:20:43.632 19:21:20 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:20:43.632 19:21:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:43.632 19:21:20 -- common/autotest_common.sh@10 -- # set +x 00:20:43.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
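As in the perf test above, nvmf_veth_init leaves the initiator in the root namespace on nvmf_init_if (10.0.0.1/24) and places the target's two interfaces in the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), with the veth peers tied together by the nvmf_br bridge. A condensed sketch of that topology, before the target is launched inside the namespace below:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side,    10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target side,    10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for port in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$port" master nvmf_br; done
# plus: all links (and lo inside the namespace) set up, an INPUT ACCEPT rule for TCP port 4420
# on nvmf_init_if, and a FORWARD ACCEPT rule for traffic bridged across nvmf_br

The pings just traced confirm the path in both directions before any NVMe/TCP traffic is attempted.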
00:20:43.632 19:21:20 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:43.632 19:21:20 -- host/fio.sh@22 -- # nvmfpid=82576 00:20:43.632 19:21:20 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:43.632 19:21:20 -- host/fio.sh@26 -- # waitforlisten 82576 00:20:43.632 19:21:20 -- common/autotest_common.sh@817 -- # '[' -z 82576 ']' 00:20:43.632 19:21:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.632 19:21:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:43.632 19:21:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.632 19:21:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:43.632 19:21:20 -- common/autotest_common.sh@10 -- # set +x 00:20:43.632 [2024-02-14 19:21:20.963016] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:20:43.632 [2024-02-14 19:21:20.963095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.891 [2024-02-14 19:21:21.098272] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.891 [2024-02-14 19:21:21.194481] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:43.891 [2024-02-14 19:21:21.194692] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.891 [2024-02-14 19:21:21.194715] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.891 [2024-02-14 19:21:21.194724] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.891 [2024-02-14 19:21:21.194856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.891 [2024-02-14 19:21:21.195734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.891 [2024-02-14 19:21:21.195810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.891 [2024-02-14 19:21:21.195826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.830 19:21:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:44.830 19:21:21 -- common/autotest_common.sh@850 -- # return 0 00:20:44.830 19:21:21 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:44.830 19:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.830 19:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:44.830 [2024-02-14 19:21:21.899666] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.830 19:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.830 19:21:21 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:20:44.830 19:21:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:44.830 19:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:44.830 19:21:21 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:44.830 19:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.830 19:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:44.830 Malloc1 00:20:44.830 19:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.830 19:21:21 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:44.830 19:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.830 19:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:44.830 19:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.830 19:21:22 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:44.830 19:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.830 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:20:44.830 19:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.830 19:21:22 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.830 19:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.830 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:20:44.830 [2024-02-14 19:21:22.013530] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.830 19:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.830 19:21:22 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:44.830 19:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.830 19:21:22 -- common/autotest_common.sh@10 -- # set +x 00:20:44.830 19:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.831 19:21:22 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:44.831 19:21:22 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:44.831 19:21:22 -- common/autotest_common.sh@1337 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
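fio reaches the target through SPDK's external NVMe ioengine: the harness preloads build/fio/spdk_nvme and hands the NVMe-oF connection string to fio via --filename. example_config.fio itself is not echoed in this log, so the job file below is only a hypothetical minimal equivalent, written to match the parameters fio reports next (ioengine=spdk, 4 KiB random read/write, iodepth 128, a roughly 2-second run):

# Sketch only -- the real example_config.fio is not reproduced in this log.
# ioengine=spdk is supplied by the preloaded SPDK fio plugin, which requires thread=1.
cat > /tmp/spdk_nvme_job.fio <<'EOF'
[global]
ioengine=spdk
thread=1
rw=randrw
iodepth=128
time_based=1
runtime=2

[test]
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio /tmp/spdk_nvme_job.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The same mechanism is reused a little further down with mock_sgl_config.fio, where only the job file and block size change.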
00:20:44.831 19:21:22 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:20:44.831 19:21:22 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:44.831 19:21:22 -- common/autotest_common.sh@1316 -- # local sanitizers 00:20:44.831 19:21:22 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:44.831 19:21:22 -- common/autotest_common.sh@1318 -- # shift 00:20:44.831 19:21:22 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:20:44.831 19:21:22 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:20:44.831 19:21:22 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:44.831 19:21:22 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:20:44.831 19:21:22 -- common/autotest_common.sh@1322 -- # grep libasan 00:20:44.831 19:21:22 -- common/autotest_common.sh@1322 -- # asan_lib= 00:20:44.831 19:21:22 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:20:44.831 19:21:22 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:20:44.831 19:21:22 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:44.831 19:21:22 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:20:44.831 19:21:22 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:20:44.831 19:21:22 -- common/autotest_common.sh@1322 -- # asan_lib= 00:20:44.831 19:21:22 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:20:44.831 19:21:22 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:44.831 19:21:22 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:44.831 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:44.831 fio-3.35 00:20:44.831 Starting 1 thread 00:20:47.366 00:20:47.366 test: (groupid=0, jobs=1): err= 0: pid=82656: Wed Feb 14 19:21:24 2024 00:20:47.366 read: IOPS=10.7k, BW=41.9MiB/s (43.9MB/s)(84.0MiB/2006msec) 00:20:47.367 slat (nsec): min=1687, max=223519, avg=2149.97, stdev=2191.08 00:20:47.367 clat (usec): min=2056, max=11261, avg=6303.20, stdev=603.69 00:20:47.367 lat (usec): min=2081, max=11263, avg=6305.35, stdev=603.63 00:20:47.367 clat percentiles (usec): 00:20:47.367 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5866], 00:20:47.367 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:20:47.367 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 7046], 95.00th=[ 7308], 00:20:47.367 | 99.00th=[ 8029], 99.50th=[ 8848], 99.90th=[10159], 99.95th=[10421], 00:20:47.367 | 99.99th=[11076] 00:20:47.367 bw ( KiB/s): min=41876, max=43336, per=99.92%, avg=42843.00, stdev=660.53, samples=4 00:20:47.367 iops : min=10469, max=10834, avg=10710.75, stdev=165.13, samples=4 00:20:47.367 write: IOPS=10.7k, BW=41.8MiB/s (43.8MB/s)(83.9MiB/2006msec); 0 zone resets 00:20:47.367 slat (nsec): min=1764, max=159483, avg=2223.19, stdev=1530.80 00:20:47.367 clat (usec): min=1373, max=10497, avg=5577.94, stdev=488.92 00:20:47.367 lat (usec): min=1380, max=10499, avg=5580.16, stdev=488.92 00:20:47.367 clat percentiles (usec): 00:20:47.367 | 1.00th=[ 4555], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 5211], 00:20:47.367 | 30.00th=[ 5342], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5669], 
00:20:47.367 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6128], 95.00th=[ 6259], 00:20:47.367 | 99.00th=[ 6849], 99.50th=[ 7635], 99.90th=[ 8848], 99.95th=[ 9634], 00:20:47.367 | 99.99th=[10421] 00:20:47.367 bw ( KiB/s): min=42403, max=43392, per=99.98%, avg=42800.75, stdev=442.89, samples=4 00:20:47.367 iops : min=10600, max=10848, avg=10700.00, stdev=110.95, samples=4 00:20:47.367 lat (msec) : 2=0.03%, 4=0.13%, 10=99.78%, 20=0.06% 00:20:47.367 cpu : usr=67.88%, sys=23.19%, ctx=10, majf=0, minf=7 00:20:47.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:47.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:47.367 issued rwts: total=21502,21468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:47.367 00:20:47.367 Run status group 0 (all jobs): 00:20:47.367 READ: bw=41.9MiB/s (43.9MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=84.0MiB (88.1MB), run=2006-2006msec 00:20:47.367 WRITE: bw=41.8MiB/s (43.8MB/s), 41.8MiB/s-41.8MiB/s (43.8MB/s-43.8MB/s), io=83.9MiB (87.9MB), run=2006-2006msec 00:20:47.367 19:21:24 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:47.367 19:21:24 -- common/autotest_common.sh@1337 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:47.367 19:21:24 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:20:47.367 19:21:24 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:47.367 19:21:24 -- common/autotest_common.sh@1316 -- # local sanitizers 00:20:47.367 19:21:24 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:47.367 19:21:24 -- common/autotest_common.sh@1318 -- # shift 00:20:47.367 19:21:24 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:20:47.367 19:21:24 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.367 19:21:24 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:47.367 19:21:24 -- common/autotest_common.sh@1322 -- # grep libasan 00:20:47.367 19:21:24 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:20:47.367 19:21:24 -- common/autotest_common.sh@1322 -- # asan_lib= 00:20:47.367 19:21:24 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:20:47.367 19:21:24 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.367 19:21:24 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:47.367 19:21:24 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:20:47.367 19:21:24 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:20:47.367 19:21:24 -- common/autotest_common.sh@1322 -- # asan_lib= 00:20:47.367 19:21:24 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:20:47.367 19:21:24 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:47.367 19:21:24 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
00:20:47.367 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:47.367 fio-3.35 00:20:47.367 Starting 1 thread 00:20:49.902 00:20:49.902 test: (groupid=0, jobs=1): err= 0: pid=82699: Wed Feb 14 19:21:27 2024 00:20:49.902 read: IOPS=9015, BW=141MiB/s (148MB/s)(283MiB/2006msec) 00:20:49.902 slat (usec): min=2, max=111, avg= 3.42, stdev= 2.40 00:20:49.902 clat (usec): min=2075, max=23404, avg=8437.94, stdev=2207.18 00:20:49.902 lat (usec): min=2079, max=23406, avg=8441.36, stdev=2207.34 00:20:49.902 clat percentiles (usec): 00:20:49.902 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6325], 00:20:49.902 | 30.00th=[ 7046], 40.00th=[ 7701], 50.00th=[ 8455], 60.00th=[ 9110], 00:20:49.902 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10945], 95.00th=[12125], 00:20:49.902 | 99.00th=[13829], 99.50th=[15270], 99.90th=[18220], 99.95th=[19268], 00:20:49.902 | 99.99th=[21890] 00:20:49.902 bw ( KiB/s): min=65728, max=79520, per=50.25%, avg=72490.50, stdev=6027.83, samples=4 00:20:49.902 iops : min= 4108, max= 4970, avg=4530.50, stdev=376.65, samples=4 00:20:49.902 write: IOPS=5351, BW=83.6MiB/s (87.7MB/s)(147MiB/1762msec); 0 zone resets 00:20:49.902 slat (usec): min=29, max=320, avg=34.71, stdev= 9.18 00:20:49.902 clat (usec): min=4862, max=22267, avg=10257.46, stdev=2028.56 00:20:49.902 lat (usec): min=4892, max=22299, avg=10292.17, stdev=2030.52 00:20:49.902 clat percentiles (usec): 00:20:49.902 | 1.00th=[ 6980], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8586], 00:20:49.903 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:20:49.903 | 70.00th=[10945], 80.00th=[11731], 90.00th=[13173], 95.00th=[14091], 00:20:49.903 | 99.00th=[15664], 99.50th=[16057], 99.90th=[22152], 99.95th=[22152], 00:20:49.903 | 99.99th=[22152] 00:20:49.903 bw ( KiB/s): min=67456, max=83296, per=88.05%, avg=75400.50, stdev=6994.66, samples=4 00:20:49.903 iops : min= 4216, max= 5206, avg=4712.50, stdev=437.15, samples=4 00:20:49.903 lat (msec) : 4=0.29%, 10=67.99%, 20=31.58%, 50=0.14% 00:20:49.903 cpu : usr=70.32%, sys=19.65%, ctx=7, majf=0, minf=22 00:20:49.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:20:49.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:49.903 issued rwts: total=18085,9430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:49.903 00:20:49.903 Run status group 0 (all jobs): 00:20:49.903 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=283MiB (296MB), run=2006-2006msec 00:20:49.903 WRITE: bw=83.6MiB/s (87.7MB/s), 83.6MiB/s-83.6MiB/s (87.7MB/s-87.7MB/s), io=147MiB (155MB), run=1762-1762msec 00:20:49.903 19:21:27 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.903 19:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.903 19:21:27 -- common/autotest_common.sh@10 -- # set +x 00:20:49.903 19:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.903 19:21:27 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:20:49.903 19:21:27 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:20:49.903 19:21:27 -- host/fio.sh@49 -- # get_nvme_bdfs 00:20:49.903 19:21:27 -- common/autotest_common.sh@1496 -- # bdfs=() 00:20:49.903 19:21:27 -- common/autotest_common.sh@1496 -- # local bdfs 00:20:49.903 19:21:27 -- common/autotest_common.sh@1497 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:49.903 19:21:27 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:49.903 19:21:27 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:20:49.903 19:21:27 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:20:49.903 19:21:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:49.903 19:21:27 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:20:49.903 19:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.903 19:21:27 -- common/autotest_common.sh@10 -- # set +x 00:20:49.903 Nvme0n1 00:20:49.903 19:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.903 19:21:27 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:49.903 19:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.903 19:21:27 -- common/autotest_common.sh@10 -- # set +x 00:20:49.903 19:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.903 19:21:27 -- host/fio.sh@51 -- # ls_guid=04617c39-929c-4c64-bb9b-2f8ab4529f7e 00:20:49.903 19:21:27 -- host/fio.sh@52 -- # get_lvs_free_mb 04617c39-929c-4c64-bb9b-2f8ab4529f7e 00:20:49.903 19:21:27 -- common/autotest_common.sh@1341 -- # local lvs_uuid=04617c39-929c-4c64-bb9b-2f8ab4529f7e 00:20:49.903 19:21:27 -- common/autotest_common.sh@1342 -- # local lvs_info 00:20:49.903 19:21:27 -- common/autotest_common.sh@1343 -- # local fc 00:20:49.903 19:21:27 -- common/autotest_common.sh@1344 -- # local cs 00:20:49.903 19:21:27 -- common/autotest_common.sh@1345 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:49.903 19:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.903 19:21:27 -- common/autotest_common.sh@10 -- # set +x 00:20:49.903 19:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.903 19:21:27 -- common/autotest_common.sh@1345 -- # lvs_info='[ 00:20:49.903 { 00:20:49.903 "base_bdev": "Nvme0n1", 00:20:49.903 "block_size": 4096, 00:20:49.903 "cluster_size": 1073741824, 00:20:49.903 "free_clusters": 4, 00:20:49.903 "name": "lvs_0", 00:20:49.903 "total_data_clusters": 4, 00:20:49.903 "uuid": "04617c39-929c-4c64-bb9b-2f8ab4529f7e" 00:20:49.903 } 00:20:49.903 ]' 00:20:49.903 19:21:27 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="04617c39-929c-4c64-bb9b-2f8ab4529f7e") .free_clusters' 00:20:49.903 19:21:27 -- common/autotest_common.sh@1346 -- # fc=4 00:20:49.903 19:21:27 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="04617c39-929c-4c64-bb9b-2f8ab4529f7e") .cluster_size' 00:20:49.903 19:21:27 -- common/autotest_common.sh@1347 -- # cs=1073741824 00:20:49.903 19:21:27 -- common/autotest_common.sh@1350 -- # free_mb=4096 00:20:49.903 4096 00:20:49.903 19:21:27 -- common/autotest_common.sh@1351 -- # echo 4096 00:20:49.903 19:21:27 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:20:49.903 19:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.903 19:21:27 -- common/autotest_common.sh@10 -- # set +x 00:20:49.903 7a82ce18-dca0-40b4-a330-fc94cb450b9d 00:20:49.903 19:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.903 19:21:27 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:20:49.903 19:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.903 19:21:27 -- common/autotest_common.sh@10 -- # 
set +x 00:20:49.903 19:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.903 19:21:27 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:20:49.903 19:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.903 19:21:27 -- common/autotest_common.sh@10 -- # set +x 00:20:49.903 19:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.903 19:21:27 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:49.903 19:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.903 19:21:27 -- common/autotest_common.sh@10 -- # set +x 00:20:49.903 19:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.903 19:21:27 -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:49.903 19:21:27 -- common/autotest_common.sh@1337 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:49.903 19:21:27 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:20:49.903 19:21:27 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:49.903 19:21:27 -- common/autotest_common.sh@1316 -- # local sanitizers 00:20:49.903 19:21:27 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:49.903 19:21:27 -- common/autotest_common.sh@1318 -- # shift 00:20:49.903 19:21:27 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:20:49.903 19:21:27 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:20:49.903 19:21:27 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:49.903 19:21:27 -- common/autotest_common.sh@1322 -- # grep libasan 00:20:49.903 19:21:27 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:20:50.161 19:21:27 -- common/autotest_common.sh@1322 -- # asan_lib= 00:20:50.161 19:21:27 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:20:50.161 19:21:27 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:20:50.161 19:21:27 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:50.161 19:21:27 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:20:50.161 19:21:27 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:20:50.161 19:21:27 -- common/autotest_common.sh@1322 -- # asan_lib= 00:20:50.161 19:21:27 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:20:50.161 19:21:27 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:50.161 19:21:27 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:50.161 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:50.161 fio-3.35 00:20:50.161 Starting 1 thread 00:20:52.695 00:20:52.695 test: (groupid=0, jobs=1): err= 0: pid=82778: Wed Feb 14 19:21:29 2024 00:20:52.695 read: IOPS=6242, BW=24.4MiB/s (25.6MB/s)(49.0MiB/2008msec) 00:20:52.695 slat (nsec): min=1702, max=338028, avg=2839.01, stdev=4541.84 00:20:52.695 clat 
(usec): min=4264, max=19579, avg=10895.67, stdev=1059.90 00:20:52.695 lat (usec): min=4272, max=19581, avg=10898.51, stdev=1059.71 00:20:52.695 clat percentiles (usec): 00:20:52.695 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:20:52.695 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:20:52.695 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:20:52.695 | 99.00th=[13435], 99.50th=[14091], 99.90th=[18482], 99.95th=[18744], 00:20:52.695 | 99.99th=[18744] 00:20:52.695 bw ( KiB/s): min=24056, max=25592, per=99.82%, avg=24924.00, stdev=659.84, samples=4 00:20:52.695 iops : min= 6014, max= 6398, avg=6231.00, stdev=164.96, samples=4 00:20:52.696 write: IOPS=6235, BW=24.4MiB/s (25.5MB/s)(48.9MiB/2008msec); 0 zone resets 00:20:52.696 slat (nsec): min=1800, max=273992, avg=2974.56, stdev=3810.88 00:20:52.696 clat (usec): min=2386, max=17723, avg=9534.59, stdev=890.49 00:20:52.696 lat (usec): min=2398, max=17725, avg=9537.57, stdev=890.34 00:20:52.696 clat percentiles (usec): 00:20:52.696 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:20:52.696 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:20:52.696 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945], 00:20:52.696 | 99.00th=[11600], 99.50th=[11863], 99.90th=[15401], 99.95th=[16712], 00:20:52.696 | 99.99th=[17695] 00:20:52.696 bw ( KiB/s): min=24696, max=25088, per=99.96%, avg=24930.00, stdev=176.05, samples=4 00:20:52.696 iops : min= 6174, max= 6272, avg=6232.50, stdev=44.01, samples=4 00:20:52.696 lat (msec) : 4=0.04%, 10=44.90%, 20=55.06% 00:20:52.696 cpu : usr=68.76%, sys=23.82%, ctx=6, majf=0, minf=26 00:20:52.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:20:52.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.696 issued rwts: total=12534,12520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.696 00:20:52.696 Run status group 0 (all jobs): 00:20:52.696 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=49.0MiB (51.3MB), run=2008-2008msec 00:20:52.696 WRITE: bw=24.4MiB/s (25.5MB/s), 24.4MiB/s-24.4MiB/s (25.5MB/s-25.5MB/s), io=48.9MiB (51.3MB), run=2008-2008msec 00:20:52.696 19:21:29 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:52.696 19:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.696 19:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.696 19:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.696 19:21:29 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:20:52.696 19:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.696 19:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.696 19:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.696 19:21:29 -- host/fio.sh@62 -- # ls_nested_guid=1bfe2183-3f11-4f32-b0ea-bbf4c71c0700 00:20:52.696 19:21:29 -- host/fio.sh@63 -- # get_lvs_free_mb 1bfe2183-3f11-4f32-b0ea-bbf4c71c0700 00:20:52.696 19:21:29 -- common/autotest_common.sh@1341 -- # local lvs_uuid=1bfe2183-3f11-4f32-b0ea-bbf4c71c0700 00:20:52.696 19:21:29 -- common/autotest_common.sh@1342 -- # local lvs_info 00:20:52.696 19:21:29 -- common/autotest_common.sh@1343 -- # local fc 00:20:52.696 19:21:29 -- 
common/autotest_common.sh@1344 -- # local cs 00:20:52.696 19:21:29 -- common/autotest_common.sh@1345 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:52.696 19:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.696 19:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.696 19:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.696 19:21:29 -- common/autotest_common.sh@1345 -- # lvs_info='[ 00:20:52.696 { 00:20:52.696 "base_bdev": "Nvme0n1", 00:20:52.696 "block_size": 4096, 00:20:52.696 "cluster_size": 1073741824, 00:20:52.696 "free_clusters": 0, 00:20:52.696 "name": "lvs_0", 00:20:52.696 "total_data_clusters": 4, 00:20:52.696 "uuid": "04617c39-929c-4c64-bb9b-2f8ab4529f7e" 00:20:52.696 }, 00:20:52.696 { 00:20:52.696 "base_bdev": "7a82ce18-dca0-40b4-a330-fc94cb450b9d", 00:20:52.696 "block_size": 4096, 00:20:52.696 "cluster_size": 4194304, 00:20:52.696 "free_clusters": 1022, 00:20:52.696 "name": "lvs_n_0", 00:20:52.696 "total_data_clusters": 1022, 00:20:52.696 "uuid": "1bfe2183-3f11-4f32-b0ea-bbf4c71c0700" 00:20:52.696 } 00:20:52.696 ]' 00:20:52.696 19:21:29 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="1bfe2183-3f11-4f32-b0ea-bbf4c71c0700") .free_clusters' 00:20:52.696 19:21:29 -- common/autotest_common.sh@1346 -- # fc=1022 00:20:52.696 19:21:29 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="1bfe2183-3f11-4f32-b0ea-bbf4c71c0700") .cluster_size' 00:20:52.696 4088 00:20:52.696 19:21:29 -- common/autotest_common.sh@1347 -- # cs=4194304 00:20:52.696 19:21:29 -- common/autotest_common.sh@1350 -- # free_mb=4088 00:20:52.696 19:21:29 -- common/autotest_common.sh@1351 -- # echo 4088 00:20:52.696 19:21:29 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:20:52.696 19:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.696 19:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.696 a0ad1402-036c-4b7c-9e2a-26c42cc38ccb 00:20:52.696 19:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.696 19:21:29 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:20:52.696 19:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.696 19:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.696 19:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.696 19:21:29 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:20:52.696 19:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.696 19:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.696 19:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.696 19:21:29 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:52.696 19:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.696 19:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.696 19:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.696 19:21:29 -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.696 19:21:29 -- common/autotest_common.sh@1337 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.696 19:21:29 -- 
common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:20:52.696 19:21:29 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.696 19:21:29 -- common/autotest_common.sh@1316 -- # local sanitizers 00:20:52.696 19:21:29 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.696 19:21:29 -- common/autotest_common.sh@1318 -- # shift 00:20:52.696 19:21:29 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:20:52.696 19:21:29 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.696 19:21:29 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.696 19:21:29 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:20:52.696 19:21:29 -- common/autotest_common.sh@1322 -- # grep libasan 00:20:52.696 19:21:29 -- common/autotest_common.sh@1322 -- # asan_lib= 00:20:52.696 19:21:29 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:20:52.696 19:21:29 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.696 19:21:29 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.696 19:21:29 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:20:52.696 19:21:29 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:20:52.696 19:21:29 -- common/autotest_common.sh@1322 -- # asan_lib= 00:20:52.696 19:21:29 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:20:52.696 19:21:29 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:52.696 19:21:29 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.955 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:52.955 fio-3.35 00:20:52.955 Starting 1 thread 00:20:55.493 00:20:55.493 test: (groupid=0, jobs=1): err= 0: pid=82833: Wed Feb 14 19:21:32 2024 00:20:55.493 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(48.0MiB/2009msec) 00:20:55.493 slat (nsec): min=1649, max=325066, avg=2985.53, stdev=4931.95 00:20:55.493 clat (usec): min=4292, max=18502, avg=11256.14, stdev=1115.28 00:20:55.493 lat (usec): min=4301, max=18504, avg=11259.13, stdev=1115.05 00:20:55.493 clat percentiles (usec): 00:20:55.493 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:20:55.493 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:20:55.493 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12649], 95.00th=[13042], 00:20:55.493 | 99.00th=[13829], 99.50th=[14091], 99.90th=[17171], 99.95th=[17433], 00:20:55.493 | 99.99th=[18482] 00:20:55.493 bw ( KiB/s): min=23488, max=24800, per=99.91%, avg=24456.00, stdev=646.04, samples=4 00:20:55.493 iops : min= 5872, max= 6200, avg=6114.00, stdev=161.51, samples=4 00:20:55.493 write: IOPS=6100, BW=23.8MiB/s (25.0MB/s)(47.9MiB/2009msec); 0 zone resets 00:20:55.493 slat (nsec): min=1731, max=373597, avg=3105.11, stdev=4824.84 00:20:55.493 clat (usec): min=2481, max=17473, avg=9621.24, stdev=946.89 00:20:55.493 lat (usec): min=2494, max=17475, avg=9624.34, stdev=946.83 00:20:55.493 clat percentiles (usec): 00:20:55.493 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8848], 00:20:55.493 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:20:55.493 | 
70.00th=[10028], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:20:55.493 | 99.00th=[11600], 99.50th=[11863], 99.90th=[16450], 99.95th=[16909], 00:20:55.493 | 99.99th=[17433] 00:20:55.493 bw ( KiB/s): min=24264, max=24608, per=99.95%, avg=24388.00, stdev=155.19, samples=4 00:20:55.493 iops : min= 6066, max= 6152, avg=6097.00, stdev=38.80, samples=4 00:20:55.493 lat (msec) : 4=0.04%, 10=39.15%, 20=60.82% 00:20:55.493 cpu : usr=65.79%, sys=25.55%, ctx=6, majf=0, minf=26 00:20:55.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:20:55.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:55.493 issued rwts: total=12294,12255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:55.493 00:20:55.493 Run status group 0 (all jobs): 00:20:55.493 READ: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.0MiB (50.4MB), run=2009-2009msec 00:20:55.493 WRITE: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=47.9MiB (50.2MB), run=2009-2009msec 00:20:55.493 19:21:32 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:55.493 19:21:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.493 19:21:32 -- common/autotest_common.sh@10 -- # set +x 00:20:55.493 19:21:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.493 19:21:32 -- host/fio.sh@72 -- # sync 00:20:55.493 19:21:32 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:20:55.493 19:21:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.493 19:21:32 -- common/autotest_common.sh@10 -- # set +x 00:20:55.493 19:21:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.493 19:21:32 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:20:55.493 19:21:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.493 19:21:32 -- common/autotest_common.sh@10 -- # set +x 00:20:55.493 19:21:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.493 19:21:32 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:20:55.493 19:21:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.493 19:21:32 -- common/autotest_common.sh@10 -- # set +x 00:20:55.493 19:21:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.493 19:21:32 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:20:55.493 19:21:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.493 19:21:32 -- common/autotest_common.sh@10 -- # set +x 00:20:55.493 19:21:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.493 19:21:32 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:20:55.493 19:21:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.493 19:21:32 -- common/autotest_common.sh@10 -- # set +x 00:20:55.751 19:21:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.751 19:21:33 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:20:55.751 19:21:33 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:20:55.751 19:21:33 -- host/fio.sh@84 -- # nvmftestfini 00:20:55.751 19:21:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:55.752 19:21:33 -- nvmf/common.sh@116 -- # sync 00:20:55.752 19:21:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:55.752 19:21:33 -- nvmf/common.sh@119 -- # set +e 00:20:55.752 19:21:33 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:20:55.752 19:21:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:55.752 rmmod nvme_tcp 00:20:56.010 rmmod nvme_fabrics 00:20:56.010 rmmod nvme_keyring 00:20:56.010 19:21:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:56.010 19:21:33 -- nvmf/common.sh@123 -- # set -e 00:20:56.010 19:21:33 -- nvmf/common.sh@124 -- # return 0 00:20:56.010 19:21:33 -- nvmf/common.sh@477 -- # '[' -n 82576 ']' 00:20:56.010 19:21:33 -- nvmf/common.sh@478 -- # killprocess 82576 00:20:56.010 19:21:33 -- common/autotest_common.sh@924 -- # '[' -z 82576 ']' 00:20:56.010 19:21:33 -- common/autotest_common.sh@928 -- # kill -0 82576 00:20:56.010 19:21:33 -- common/autotest_common.sh@929 -- # uname 00:20:56.010 19:21:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:20:56.010 19:21:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 82576 00:20:56.010 killing process with pid 82576 00:20:56.010 19:21:33 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:20:56.010 19:21:33 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:20:56.010 19:21:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 82576' 00:20:56.010 19:21:33 -- common/autotest_common.sh@943 -- # kill 82576 00:20:56.010 19:21:33 -- common/autotest_common.sh@948 -- # wait 82576 00:20:56.269 19:21:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:56.269 19:21:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:56.269 19:21:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:56.269 19:21:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.269 19:21:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:56.269 19:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.269 19:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.269 19:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.269 19:21:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:56.269 00:20:56.269 real 0m13.150s 00:20:56.269 user 0m54.103s 00:20:56.269 sys 0m3.627s 00:20:56.269 19:21:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:56.269 ************************************ 00:20:56.269 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:20:56.269 END TEST nvmf_fio_host 00:20:56.269 ************************************ 00:20:56.269 19:21:33 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:56.269 19:21:33 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:20:56.269 19:21:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:56.269 19:21:33 -- common/autotest_common.sh@10 -- # set +x 00:20:56.269 ************************************ 00:20:56.269 START TEST nvmf_failover 00:20:56.269 ************************************ 00:20:56.269 19:21:33 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:56.529 * Looking for test storage... 
00:20:56.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:56.529 19:21:33 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:56.529 19:21:33 -- nvmf/common.sh@7 -- # uname -s 00:20:56.529 19:21:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.529 19:21:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.529 19:21:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.529 19:21:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.529 19:21:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.529 19:21:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.529 19:21:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.529 19:21:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.529 19:21:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.529 19:21:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.529 19:21:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:20:56.529 19:21:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:20:56.529 19:21:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.529 19:21:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.529 19:21:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:56.529 19:21:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.529 19:21:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.529 19:21:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.529 19:21:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.529 19:21:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.529 19:21:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.529 19:21:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.529 19:21:33 -- paths/export.sh@5 
-- # export PATH 00:20:56.529 19:21:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.529 19:21:33 -- nvmf/common.sh@46 -- # : 0 00:20:56.529 19:21:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:56.529 19:21:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:56.529 19:21:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:56.529 19:21:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.529 19:21:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.529 19:21:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:56.529 19:21:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:56.529 19:21:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:56.529 19:21:33 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:56.529 19:21:33 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:56.529 19:21:33 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.529 19:21:33 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.529 19:21:33 -- host/failover.sh@18 -- # nvmftestinit 00:20:56.529 19:21:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:56.529 19:21:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.529 19:21:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:56.529 19:21:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:56.529 19:21:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:56.530 19:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.530 19:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.530 19:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.530 19:21:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:56.530 19:21:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:56.530 19:21:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:56.530 19:21:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:56.530 19:21:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:56.530 19:21:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:56.530 19:21:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.530 19:21:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.530 19:21:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:56.530 19:21:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:56.530 19:21:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:56.530 19:21:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:56.530 19:21:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:56.530 19:21:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.530 19:21:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:56.530 19:21:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:56.530 19:21:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:20:56.530 19:21:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:56.530 19:21:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:56.530 19:21:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:56.530 Cannot find device "nvmf_tgt_br" 00:20:56.530 19:21:33 -- nvmf/common.sh@154 -- # true 00:20:56.530 19:21:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:56.530 Cannot find device "nvmf_tgt_br2" 00:20:56.530 19:21:33 -- nvmf/common.sh@155 -- # true 00:20:56.530 19:21:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:56.530 19:21:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:56.530 Cannot find device "nvmf_tgt_br" 00:20:56.530 19:21:33 -- nvmf/common.sh@157 -- # true 00:20:56.530 19:21:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:56.530 Cannot find device "nvmf_tgt_br2" 00:20:56.530 19:21:33 -- nvmf/common.sh@158 -- # true 00:20:56.530 19:21:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:56.530 19:21:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:56.530 19:21:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:56.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.530 19:21:33 -- nvmf/common.sh@161 -- # true 00:20:56.530 19:21:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:56.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.530 19:21:33 -- nvmf/common.sh@162 -- # true 00:20:56.530 19:21:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:56.530 19:21:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:56.530 19:21:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:56.530 19:21:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:56.530 19:21:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:56.530 19:21:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:56.789 19:21:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:56.789 19:21:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:56.789 19:21:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:56.789 19:21:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:56.789 19:21:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:56.789 19:21:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:56.789 19:21:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:56.789 19:21:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:56.789 19:21:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:56.789 19:21:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:56.789 19:21:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:56.789 19:21:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:56.789 19:21:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:56.789 19:21:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:56.789 19:21:34 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:20:56.789 19:21:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:56.789 19:21:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:56.789 19:21:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:56.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:20:56.789 00:20:56.789 --- 10.0.0.2 ping statistics --- 00:20:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.789 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:56.789 19:21:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:56.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:56.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:20:56.789 00:20:56.789 --- 10.0.0.3 ping statistics --- 00:20:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.789 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:56.789 19:21:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:56.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:56.789 00:20:56.789 --- 10.0.0.1 ping statistics --- 00:20:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.789 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:56.789 19:21:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.789 19:21:34 -- nvmf/common.sh@421 -- # return 0 00:20:56.789 19:21:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:56.789 19:21:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.789 19:21:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:56.789 19:21:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:56.789 19:21:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.789 19:21:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:56.789 19:21:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:56.789 19:21:34 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:56.789 19:21:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:56.789 19:21:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:56.789 19:21:34 -- common/autotest_common.sh@10 -- # set +x 00:20:56.789 19:21:34 -- nvmf/common.sh@469 -- # nvmfpid=83050 00:20:56.789 19:21:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:56.789 19:21:34 -- nvmf/common.sh@470 -- # waitforlisten 83050 00:20:56.790 19:21:34 -- common/autotest_common.sh@817 -- # '[' -z 83050 ']' 00:20:56.790 19:21:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.790 19:21:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.790 19:21:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.790 19:21:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.790 19:21:34 -- common/autotest_common.sh@10 -- # set +x 00:20:56.790 [2024-02-14 19:21:34.165021] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:20:56.790 [2024-02-14 19:21:34.165093] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.048 [2024-02-14 19:21:34.296449] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:57.049 [2024-02-14 19:21:34.394207] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:57.049 [2024-02-14 19:21:34.394533] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.049 [2024-02-14 19:21:34.394678] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.049 [2024-02-14 19:21:34.394896] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.049 [2024-02-14 19:21:34.395202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.049 [2024-02-14 19:21:34.395342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.049 [2024-02-14 19:21:34.395350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.984 19:21:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.984 19:21:35 -- common/autotest_common.sh@850 -- # return 0 00:20:57.984 19:21:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:57.984 19:21:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:57.984 19:21:35 -- common/autotest_common.sh@10 -- # set +x 00:20:57.984 19:21:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.984 19:21:35 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:58.243 [2024-02-14 19:21:35.409699] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.243 19:21:35 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:58.243 Malloc0 00:20:58.243 19:21:35 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.502 19:21:35 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.761 19:21:36 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.020 [2024-02-14 19:21:36.210112] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.020 19:21:36 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:59.020 [2024-02-14 19:21:36.402264] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:59.020 19:21:36 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:59.279 [2024-02-14 19:21:36.594555] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:59.279 19:21:36 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 
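The bdevperf instance launched above is then driven over its private RPC socket: the trace that follows attaches NVMe0 to the subsystem through the primary and secondary listener ports, starts the verify workload with bdevperf.py, and exercises failover by removing the active listener on the target side. As standalone commands this is roughly the following sketch (socket path, addresses, and NQN are taken from the trace; bdevperf and the target are assumed to be running):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off I/O in the background, then drop the active listener to force a path switch
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420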
00:20:59.279 19:21:36 -- host/failover.sh@31 -- # bdevperf_pid=83162 00:20:59.279 19:21:36 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:59.279 19:21:36 -- host/failover.sh@34 -- # waitforlisten 83162 /var/tmp/bdevperf.sock 00:20:59.279 19:21:36 -- common/autotest_common.sh@817 -- # '[' -z 83162 ']' 00:20:59.279 19:21:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.279 19:21:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:59.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.279 19:21:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.279 19:21:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:59.279 19:21:36 -- common/autotest_common.sh@10 -- # set +x 00:21:00.218 19:21:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:00.218 19:21:37 -- common/autotest_common.sh@850 -- # return 0 00:21:00.218 19:21:37 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:00.786 NVMe0n1 00:21:00.786 19:21:37 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:01.062 00:21:01.062 19:21:38 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.062 19:21:38 -- host/failover.sh@39 -- # run_test_pid=83208 00:21:01.062 19:21:38 -- host/failover.sh@41 -- # sleep 1 00:21:02.049 19:21:39 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.310 [2024-02-14 19:21:39.497667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bfe90 is same with the state(5) to be set 00:21:02.310 [2024-02-14 19:21:39.497724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bfe90 is same with the state(5) to be set 00:21:02.310 [2024-02-14 19:21:39.497734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bfe90 is same with the state(5) to be set 00:21:02.310 [2024-02-14 19:21:39.497743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bfe90 is same with the state(5) to be set 00:21:02.310 [2024-02-14 19:21:39.497752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bfe90 is same with the state(5) to be set 00:21:02.310 [2024-02-14 19:21:39.497760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bfe90 is same with the state(5) to be set 00:21:02.310 [2024-02-14 19:21:39.497769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bfe90 is same with the state(5) to be set 00:21:02.310 [2024-02-14 19:21:39.497777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bfe90 is same with the state(5) to be set 00:21:02.310 [2024-02-14 19:21:39.497786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bfe90 is 
same with the state(5) to be set
00:21:02.310 [2024-02-14 19:21:39.497794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bfe90 is same with the state(5) to be set
[identical tcp.c:1574 nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x22bfe90 omitted; the message repeats with timestamps advancing through 2024-02-14 19:21:39.498274]
00:21:02.310 19:21:39 -- host/failover.sh@45 -- # sleep 3
00:21:05.602 19:21:42 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:05.602
00:21:05.602 19:21:42 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:05.861 [2024-02-14 19:21:43.023130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c0d20 is same with the state(5) to be set
[identical tcp.c:1574 nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x22c0d20 omitted; the message repeats with timestamps advancing through 2024-02-14 19:21:43.023835]
00:21:05.862 19:21:43 -- host/failover.sh@50 -- # sleep 3
00:21:09.146 19:21:46 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:09.146 [2024-02-14 19:21:46.272450] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:09.146 19:21:46 -- host/failover.sh@55 -- # sleep 1
00:21:10.082 19:21:47 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:10.342 [2024-02-14 19:21:47.528600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1870 is same with the state(5) to be set
[identical tcp.c:1574 nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x22c1870 omitted; the message repeats with timestamps advancing through 2024-02-14 19:21:47.529334]
00:21:10.343 19:21:47 -- host/failover.sh@59 -- # wait 83208
00:21:16.921 0
00:21:16.921 19:21:53 -- host/failover.sh@61 -- # killprocess 83162
00:21:16.921 19:21:53 -- common/autotest_common.sh@924 -- # '[' -z 83162 ']'
00:21:16.921 19:21:53 -- common/autotest_common.sh@928 -- # kill -0 83162
00:21:16.921 19:21:53 -- common/autotest_common.sh@929 -- # uname
00:21:16.921 19:21:53 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:21:16.921 19:21:53 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 83162
00:21:16.921 killing process with pid 83162
00:21:16.921 19:21:53 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:21:16.921 19:21:53 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:21:16.921 19:21:53 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 83162'
00:21:16.921 19:21:53 -- common/autotest_common.sh@943 -- # kill 83162
00:21:16.921 19:21:53 -- common/autotest_common.sh@948 -- # wait 83162
00:21:16.921 19:21:53 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:16.921 [2024-02-14 19:21:36.653639] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:21:16.921 [2024-02-14 19:21:36.654183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83162 ]
00:21:16.921 [2024-02-14 19:21:36.784682] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:16.921 [2024-02-14 19:21:36.868121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
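[Editor's note: the listener-migration sequence that the host/failover.sh trace above drives can be condensed into the sketch below. Every rpc.py invocation, address, port, and NQN is copied verbatim from the trace; the inline comments and the interpretation of the pids are editorial, and this is a summary of the logged steps, not the literal failover.sh source. The remainder of the bdevperf log (try.txt) being dumped above continues right after this sketch.]

    sleep 3
    # Give the running bdevperf app a second path to the subsystem, via the listener on port 4422.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Remove the listener the host had been using (port 4421) so its I/O must move to another path.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    # Restore the original listener on port 4420, then retire the temporary one on 4422.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # The script then waits for a background job (pid 83208 in this run), stops the
    # bdevperf process (pid 83162, per the spdk_pid83162 file prefix) via killprocess,
    # and prints the bdevperf output:
    cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt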
00:21:16.921 [2024-02-14 19:21:39.500679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.921 [2024-02-14 19:21:39.500738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.921 [2024-02-14 19:21:39.500769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.921 [2024-02-14 19:21:39.500787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.921 [2024-02-14 19:21:39.500805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.921 [2024-02-14 19:21:39.500820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.921 [2024-02-14 19:21:39.500836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.921 [2024-02-14 19:21:39.500851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.921 [2024-02-14 19:21:39.500886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.921 [2024-02-14 19:21:39.500899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.921 [2024-02-14 19:21:39.500914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.500928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.500943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.500956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.500971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.500985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.500999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501056] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501353] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.501471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.501484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.503665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.503771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.503855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.503952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.504039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.504125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.504206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.504291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.504376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.504460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.504575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5024 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.504668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.504751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.504837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.504922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.505011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.505096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.505186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.505268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.505354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.505427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.505531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.505617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.505721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.505805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.505890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.505972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.506056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.506137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.506222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.506303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.922 
[2024-02-14 19:21:39.506393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.506476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.506589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.506673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.506759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.506859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.922 [2024-02-14 19:21:39.506952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.922 [2024-02-14 19:21:39.507035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.922 [2024-02-14 19:21:39.507128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.507210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.507299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.507371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.507456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.507554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.507642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.507727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.507813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.507909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.507994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.508065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.508150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.508230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.508316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.508402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.508504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.508589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.508677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.508787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.508881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.508965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.509052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.509124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.509226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.509308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.509397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.509478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.509582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.509669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.509760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.509832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.509923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.510004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.510105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.510186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.510272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.510343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.510431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.510530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.510623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.510695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.510777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.510904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.510988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.511069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.511181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.511269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.511295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.511312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.511327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.511342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.511356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:16.923 [2024-02-14 19:21:39.511372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.511386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.511401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.511415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.511430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.511444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.511459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.511484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.511501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.511515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.512140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.512337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.512422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.512519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.512608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.512694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.512774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.512854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.512933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.513016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.513121] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.513205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.513292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.923 [2024-02-14 19:21:39.513385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.923 [2024-02-14 19:21:39.513466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.923 [2024-02-14 19:21:39.513585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.513673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.513754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.513876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.513965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.514123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.514297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.514459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.514677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.514791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.514839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.514872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.514902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.514931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.514960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.514976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.514990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.515019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5344 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.515313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.515371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.515428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 
[2024-02-14 19:21:39.515457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.515557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.515621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.515651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.515711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.924 [2024-02-14 19:21:39.515772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.924 [2024-02-14 19:21:39.515888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.924 [2024-02-14 19:21:39.515905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:39.515926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:39.515942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:39.515956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:39.515980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:39.515995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:39.516011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:39.516026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:39.516041] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd8250 is same with the state(5) to be set 00:21:16.925 [2024-02-14 19:21:39.516060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.925 [2024-02-14 19:21:39.516072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.925 [2024-02-14 19:21:39.516084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:8 PRP1 0x0 PRP2 0x0 00:21:16.925 [2024-02-14 19:21:39.516097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:39.516183] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdd8250 was disconnected and freed. reset controller. 
00:21:16.925 [2024-02-14 19:21:39.516204] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:16.925 [2024-02-14 19:21:39.516270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.925 [2024-02-14 19:21:39.516309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:39.516344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.925 [2024-02-14 19:21:39.516359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:39.516375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.925 [2024-02-14 19:21:39.516391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:39.516407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.925 [2024-02-14 19:21:39.516422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:39.516437] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:16.925 [2024-02-14 19:21:39.516510] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd73170 (9): Bad file descriptor 00:21:16.925 [2024-02-14 19:21:39.518650] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:16.925 [2024-02-14 19:21:39.551275] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:16.925 [2024-02-14 19:21:43.024013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024455] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.024950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.925 [2024-02-14 19:21:43.024982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.024998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.925 [2024-02-14 19:21:43.025013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.925 [2024-02-14 19:21:43.025029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.925 [2024-02-14 19:21:43.025043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49128 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:16.926 [2024-02-14 19:21:43.025513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.025923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.926 [2024-02-14 19:21:43.025982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.025998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.926 [2024-02-14 19:21:43.026012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.926 [2024-02-14 19:21:43.026027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.026041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.026072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026131] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.026724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.026755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.026786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:16.927 [2024-02-14 19:21:43.026839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.026892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.026924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.026955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.026971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.026986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.027017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.027048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.027078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.027109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.027169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027186] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.027201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.027231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.027263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.927 [2024-02-14 19:21:43.027293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.027323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.927 [2024-02-14 19:21:43.027339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.927 [2024-02-14 19:21:43.027353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.027383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.027413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.027443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.027473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.027519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.027567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.027608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.027640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.027670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.027701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.027741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.027772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.027803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.027833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:101 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.027879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.027910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.027940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.027970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.027986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.028008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.028040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.028071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.028101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.928 [2024-02-14 19:21:43.028131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.028167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48960 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.028199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.028229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.028266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.028297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.028327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:43.028357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.928 [2024-02-14 19:21:43.028411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.928 [2024-02-14 19:21:43.028423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49064 len:8 PRP1 0x0 PRP2 0x0 00:21:16.928 [2024-02-14 19:21:43.028549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028625] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd768e0 was disconnected and freed. reset controller. 
00:21:16.928 [2024-02-14 19:21:43.028646] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:16.928 [2024-02-14 19:21:43.028707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.928 [2024-02-14 19:21:43.028730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.928 [2024-02-14 19:21:43.028760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.928 [2024-02-14 19:21:43.028789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.928 [2024-02-14 19:21:43.028819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:43.028834] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:16.928 [2024-02-14 19:21:43.028888] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd73170 (9): Bad file descriptor 00:21:16.928 [2024-02-14 19:21:43.031012] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:16.928 [2024-02-14 19:21:43.049603] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:16.928 [2024-02-14 19:21:47.529430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.928 [2024-02-14 19:21:47.529491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.928 [2024-02-14 19:21:47.529556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529900] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.529974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.529989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.929 [2024-02-14 19:21:47.530895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78776 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.929 [2024-02-14 19:21:47.530910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.530926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.530940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.530956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.530971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.530986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.930 [2024-02-14 19:21:47.531175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.930 [2024-02-14 19:21:47.531207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:16.930 [2024-02-14 19:21:47.531241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.930 [2024-02-14 19:21:47.531431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.930 [2024-02-14 19:21:47.531461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.930 [2024-02-14 19:21:47.531511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.930 [2024-02-14 19:21:47.531574] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.930 [2024-02-14 19:21:47.531604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.930 [2024-02-14 19:21:47.531634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531894] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.531971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.531986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.532002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.930 [2024-02-14 19:21:47.532016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.930 [2024-02-14 19:21:47.532031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.532502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.532549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:16.931 [2024-02-14 19:21:47.532565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.532579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.532610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.532641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.532758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.532820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.532851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.532880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532896] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.532941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.532970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.532986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.533000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.533031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.533068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.533099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.533130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.533160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.533197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533213] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.533227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.533263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.931 [2024-02-14 19:21:47.533294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.931 [2024-02-14 19:21:47.533323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.931 [2024-02-14 19:21:47.533339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.932 [2024-02-14 19:21:47.533354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.932 [2024-02-14 19:21:47.533383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.932 [2024-02-14 19:21:47.533413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.932 [2024-02-14 19:21:47.533443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.932 [2024-02-14 19:21:47.533481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.932 [2024-02-14 19:21:47.533528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78680 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.932 [2024-02-14 19:21:47.533559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.932 [2024-02-14 19:21:47.533588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.932 [2024-02-14 19:21:47.533619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.932 [2024-02-14 19:21:47.533649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.932 [2024-02-14 19:21:47.533680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.932 [2024-02-14 19:21:47.533717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533732] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6a570 is same with the state(5) to be set 00:21:16.932 [2024-02-14 19:21:47.533751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:16.932 [2024-02-14 19:21:47.533763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:16.932 [2024-02-14 19:21:47.533780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78840 len:8 PRP1 0x0 PRP2 0x0 00:21:16.932 [2024-02-14 19:21:47.533794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533864] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe6a570 was disconnected and freed. reset controller. 
00:21:16.932 [2024-02-14 19:21:47.533883] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:16.932 [2024-02-14 19:21:47.533945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.932 [2024-02-14 19:21:47.533966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.533983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.932 [2024-02-14 19:21:47.534007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.534024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.932 [2024-02-14 19:21:47.534038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.534053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.932 [2024-02-14 19:21:47.534067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.932 [2024-02-14 19:21:47.534081] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:16.932 [2024-02-14 19:21:47.534118] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd73170 (9): Bad file descriptor 00:21:16.932 [2024-02-14 19:21:47.536066] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:16.932 [2024-02-14 19:21:47.558266] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:16.932 00:21:16.932 Latency(us) 00:21:16.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.932 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:16.932 Verification LBA range: start 0x0 length 0x4000 00:21:16.932 NVMe0n1 : 15.01 15054.08 58.80 262.09 0.00 8342.55 573.44 26333.56 00:21:16.932 =================================================================================================================== 00:21:16.932 Total : 15054.08 58.80 262.09 0.00 8342.55 573.44 26333.56 00:21:16.932 Received shutdown signal, test time was about 15.000000 seconds 00:21:16.932 00:21:16.932 Latency(us) 00:21:16.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.932 =================================================================================================================== 00:21:16.932 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.932 19:21:53 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:16.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
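For orientation, the MiB/s column in the summary above is simply the measured IOPS multiplied by the 4096-byte verify I/O size shown in the Job line; a quick check (illustrative awk one-liner, not part of the test):

  awk 'BEGIN { print 15054.08 * 4096 / (1024 * 1024) }'   # ~58.8, matching the reported 58.80 MiB/s

The non-zero Fail/s figure (262.09) is consistent with the I/O that was aborted around the controller resets logged during the run.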
00:21:16.932 19:21:53 -- host/failover.sh@65 -- # count=3 00:21:16.932 19:21:53 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:16.932 19:21:53 -- host/failover.sh@73 -- # bdevperf_pid=83408 00:21:16.932 19:21:53 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:16.932 19:21:53 -- host/failover.sh@75 -- # waitforlisten 83408 /var/tmp/bdevperf.sock 00:21:16.932 19:21:53 -- common/autotest_common.sh@817 -- # '[' -z 83408 ']' 00:21:16.932 19:21:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.932 19:21:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:16.932 19:21:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.932 19:21:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:16.932 19:21:53 -- common/autotest_common.sh@10 -- # set +x 00:21:17.500 19:21:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:17.500 19:21:54 -- common/autotest_common.sh@850 -- # return 0 00:21:17.500 19:21:54 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:17.500 [2024-02-14 19:21:54.893795] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:17.500 19:21:54 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:17.759 [2024-02-14 19:21:55.097821] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:17.759 19:21:55 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.018 NVMe0n1 00:21:18.018 19:21:55 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.586 00:21:18.586 19:21:55 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.586 00:21:18.586 19:21:55 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:18.586 19:21:55 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:18.844 19:21:56 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:19.103 19:21:56 -- host/failover.sh@87 -- # sleep 3 00:21:22.393 19:21:59 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:22.393 19:21:59 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:22.393 19:21:59 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:22.393 19:21:59 -- host/failover.sh@90 -- # run_test_pid=83545 00:21:22.393 19:21:59 -- host/failover.sh@92 -- # wait 83545 00:21:23.330 0 00:21:23.330 19:22:00 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:23.330 [2024-02-14 19:21:53.762114] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:21:23.330 [2024-02-14 19:21:53.762300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83408 ] 00:21:23.330 [2024-02-14 19:21:53.891974] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.330 [2024-02-14 19:21:53.988432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.330 [2024-02-14 19:21:56.346860] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:23.330 [2024-02-14 19:21:56.346975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.330 [2024-02-14 19:21:56.347007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.330 [2024-02-14 19:21:56.347027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.330 [2024-02-14 19:21:56.347043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.330 [2024-02-14 19:21:56.347058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.330 [2024-02-14 19:21:56.347075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.330 [2024-02-14 19:21:56.347118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.330 [2024-02-14 19:21:56.347133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.330 [2024-02-14 19:21:56.347159] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.330 [2024-02-14 19:21:56.347205] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.330 [2024-02-14 19:21:56.347251] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156e170 (9): Bad file descriptor 00:21:23.330 [2024-02-14 19:21:56.358203] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:23.330 Running I/O for 1 seconds... 
00:21:23.330 00:21:23.330 Latency(us) 00:21:23.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.330 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:23.330 Verification LBA range: start 0x0 length 0x4000 00:21:23.330 NVMe0n1 : 1.01 15016.44 58.66 0.00 0.00 8492.13 793.13 9830.40 00:21:23.330 =================================================================================================================== 00:21:23.330 Total : 15016.44 58.66 0.00 0.00 8492.13 793.13 9830.40 00:21:23.330 19:22:00 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.330 19:22:00 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:23.589 19:22:00 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.847 19:22:01 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:23.847 19:22:01 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:24.106 19:22:01 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:24.365 19:22:01 -- host/failover.sh@101 -- # sleep 3 00:21:27.653 19:22:04 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:27.653 19:22:04 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:27.653 19:22:04 -- host/failover.sh@108 -- # killprocess 83408 00:21:27.653 19:22:04 -- common/autotest_common.sh@924 -- # '[' -z 83408 ']' 00:21:27.653 19:22:04 -- common/autotest_common.sh@928 -- # kill -0 83408 00:21:27.653 19:22:04 -- common/autotest_common.sh@929 -- # uname 00:21:27.653 19:22:04 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:27.653 19:22:04 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 83408 00:21:27.653 19:22:04 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:21:27.653 19:22:04 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:27.653 killing process with pid 83408 00:21:27.653 19:22:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 83408' 00:21:27.653 19:22:04 -- common/autotest_common.sh@943 -- # kill 83408 00:21:27.653 19:22:04 -- common/autotest_common.sh@948 -- # wait 83408 00:21:27.911 19:22:05 -- host/failover.sh@110 -- # sync 00:21:27.911 19:22:05 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.170 19:22:05 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:28.170 19:22:05 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:28.170 19:22:05 -- host/failover.sh@116 -- # nvmftestfini 00:21:28.170 19:22:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:28.170 19:22:05 -- nvmf/common.sh@116 -- # sync 00:21:28.170 19:22:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:28.170 19:22:05 -- nvmf/common.sh@119 -- # set +e 00:21:28.170 19:22:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:28.170 19:22:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:28.170 rmmod nvme_tcp 00:21:28.170 rmmod nvme_fabrics 00:21:28.170 rmmod nvme_keyring 00:21:28.170 19:22:05 -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-fabrics 00:21:28.170 19:22:05 -- nvmf/common.sh@123 -- # set -e 00:21:28.170 19:22:05 -- nvmf/common.sh@124 -- # return 0 00:21:28.170 19:22:05 -- nvmf/common.sh@477 -- # '[' -n 83050 ']' 00:21:28.170 19:22:05 -- nvmf/common.sh@478 -- # killprocess 83050 00:21:28.170 19:22:05 -- common/autotest_common.sh@924 -- # '[' -z 83050 ']' 00:21:28.170 19:22:05 -- common/autotest_common.sh@928 -- # kill -0 83050 00:21:28.170 19:22:05 -- common/autotest_common.sh@929 -- # uname 00:21:28.170 19:22:05 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:28.170 19:22:05 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 83050 00:21:28.170 19:22:05 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:21:28.170 19:22:05 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:21:28.170 19:22:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 83050' 00:21:28.170 killing process with pid 83050 00:21:28.170 19:22:05 -- common/autotest_common.sh@943 -- # kill 83050 00:21:28.170 19:22:05 -- common/autotest_common.sh@948 -- # wait 83050 00:21:28.429 19:22:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:28.429 19:22:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:28.429 19:22:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:28.429 19:22:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.429 19:22:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:28.429 19:22:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.429 19:22:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.429 19:22:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.688 19:22:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:28.688 00:21:28.688 real 0m32.207s 00:21:28.688 user 2m3.775s 00:21:28.688 sys 0m5.229s 00:21:28.688 19:22:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:28.688 19:22:05 -- common/autotest_common.sh@10 -- # set +x 00:21:28.688 ************************************ 00:21:28.688 END TEST nvmf_failover 00:21:28.688 ************************************ 00:21:28.688 19:22:05 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:28.688 19:22:05 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:21:28.688 19:22:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:28.688 19:22:05 -- common/autotest_common.sh@10 -- # set +x 00:21:28.688 ************************************ 00:21:28.688 START TEST nvmf_discovery 00:21:28.688 ************************************ 00:21:28.688 19:22:05 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:28.688 * Looking for test storage... 
00:21:28.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:28.688 19:22:05 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:28.688 19:22:05 -- nvmf/common.sh@7 -- # uname -s 00:21:28.688 19:22:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.688 19:22:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.688 19:22:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.688 19:22:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.688 19:22:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.688 19:22:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.688 19:22:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.688 19:22:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.688 19:22:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.688 19:22:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.688 19:22:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:21:28.688 19:22:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:21:28.688 19:22:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.688 19:22:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.688 19:22:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:28.688 19:22:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:28.688 19:22:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.688 19:22:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.688 19:22:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.688 19:22:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.688 19:22:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.688 19:22:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.688 19:22:06 -- paths/export.sh@5 
-- # export PATH 00:21:28.688 19:22:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.688 19:22:06 -- nvmf/common.sh@46 -- # : 0 00:21:28.688 19:22:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:28.688 19:22:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:28.688 19:22:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:28.688 19:22:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.688 19:22:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.688 19:22:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:28.688 19:22:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:28.688 19:22:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:28.688 19:22:06 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:28.688 19:22:06 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:28.688 19:22:06 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:28.688 19:22:06 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:28.688 19:22:06 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:28.688 19:22:06 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:28.688 19:22:06 -- host/discovery.sh@25 -- # nvmftestinit 00:21:28.688 19:22:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:28.688 19:22:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.688 19:22:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:28.688 19:22:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:28.688 19:22:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:28.688 19:22:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.688 19:22:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.688 19:22:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.688 19:22:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:28.688 19:22:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:28.688 19:22:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:28.688 19:22:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:28.688 19:22:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:28.688 19:22:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:28.688 19:22:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.688 19:22:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.688 19:22:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:28.688 19:22:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:28.688 19:22:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:28.688 19:22:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:28.688 19:22:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:28.688 19:22:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.688 19:22:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:28.688 
19:22:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:28.688 19:22:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:28.688 19:22:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:28.688 19:22:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:28.688 19:22:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:28.688 Cannot find device "nvmf_tgt_br" 00:21:28.688 19:22:06 -- nvmf/common.sh@154 -- # true 00:21:28.688 19:22:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:28.688 Cannot find device "nvmf_tgt_br2" 00:21:28.688 19:22:06 -- nvmf/common.sh@155 -- # true 00:21:28.688 19:22:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:28.688 19:22:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:28.688 Cannot find device "nvmf_tgt_br" 00:21:28.688 19:22:06 -- nvmf/common.sh@157 -- # true 00:21:28.688 19:22:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:28.688 Cannot find device "nvmf_tgt_br2" 00:21:28.689 19:22:06 -- nvmf/common.sh@158 -- # true 00:21:28.689 19:22:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:28.947 19:22:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:28.947 19:22:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:28.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.947 19:22:06 -- nvmf/common.sh@161 -- # true 00:21:28.947 19:22:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:28.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.947 19:22:06 -- nvmf/common.sh@162 -- # true 00:21:28.947 19:22:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:28.947 19:22:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:28.947 19:22:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:28.947 19:22:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:28.947 19:22:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:28.947 19:22:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:28.947 19:22:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:28.947 19:22:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:28.947 19:22:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:28.947 19:22:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:28.947 19:22:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:28.947 19:22:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:28.947 19:22:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:28.947 19:22:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:28.947 19:22:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:28.947 19:22:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:28.947 19:22:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:28.947 19:22:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:28.947 19:22:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:21:28.947 19:22:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:28.947 19:22:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:28.947 19:22:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:28.947 19:22:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:28.947 19:22:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:28.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:21:28.947 00:21:28.947 --- 10.0.0.2 ping statistics --- 00:21:28.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.947 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:21:28.947 19:22:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:28.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:28.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:21:28.947 00:21:28.947 --- 10.0.0.3 ping statistics --- 00:21:28.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.947 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:28.947 19:22:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:28.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:21:28.947 00:21:28.947 --- 10.0.0.1 ping statistics --- 00:21:28.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.947 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:28.947 19:22:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.947 19:22:06 -- nvmf/common.sh@421 -- # return 0 00:21:28.948 19:22:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:28.948 19:22:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.948 19:22:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:28.948 19:22:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:28.948 19:22:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.948 19:22:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:28.948 19:22:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:28.948 19:22:06 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:28.948 19:22:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:28.948 19:22:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:28.948 19:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:28.948 19:22:06 -- nvmf/common.sh@469 -- # nvmfpid=83853 00:21:28.948 19:22:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:28.948 19:22:06 -- nvmf/common.sh@470 -- # waitforlisten 83853 00:21:28.948 19:22:06 -- common/autotest_common.sh@817 -- # '[' -z 83853 ']' 00:21:28.948 19:22:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.948 19:22:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:28.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.948 19:22:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
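At this point nvmf_veth_init has rebuilt the virtual test network and verified it with the three pings above: the initiator keeps 10.0.0.1 on nvmf_init_if, the nvmf_tgt_ns_spdk namespace holds 10.0.0.2 (nvmf_tgt_if) and 10.0.0.3 (nvmf_tgt_if2), and the host-side peer interfaces are enslaved to the nvmf_br bridge. A condensed sketch of the same setup, with link-up steps and the second target interface omitted for brevity (illustrative only; the authoritative version is in test/nvmf/common.sh, sourced above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace can reach the initiator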
00:21:28.948 19:22:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:28.948 19:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:29.206 [2024-02-14 19:22:06.399008] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:21:29.206 [2024-02-14 19:22:06.399086] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.206 [2024-02-14 19:22:06.528200] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.206 [2024-02-14 19:22:06.622101] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:29.206 [2024-02-14 19:22:06.622247] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.206 [2024-02-14 19:22:06.622260] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.206 [2024-02-14 19:22:06.622268] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.206 [2024-02-14 19:22:06.622299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.141 19:22:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:30.141 19:22:07 -- common/autotest_common.sh@850 -- # return 0 00:21:30.141 19:22:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:30.141 19:22:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:30.142 19:22:07 -- common/autotest_common.sh@10 -- # set +x 00:21:30.142 19:22:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.142 19:22:07 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:30.142 19:22:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.142 19:22:07 -- common/autotest_common.sh@10 -- # set +x 00:21:30.142 [2024-02-14 19:22:07.357143] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.142 19:22:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.142 19:22:07 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:30.142 19:22:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.142 19:22:07 -- common/autotest_common.sh@10 -- # set +x 00:21:30.142 [2024-02-14 19:22:07.365263] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:30.142 19:22:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.142 19:22:07 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:30.142 19:22:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.142 19:22:07 -- common/autotest_common.sh@10 -- # set +x 00:21:30.142 null0 00:21:30.142 19:22:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.142 19:22:07 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:30.142 19:22:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.142 19:22:07 -- common/autotest_common.sh@10 -- # set +x 00:21:30.142 null1 00:21:30.142 19:22:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.142 19:22:07 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:30.142 19:22:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.142 19:22:07 -- 
common/autotest_common.sh@10 -- # set +x 00:21:30.142 19:22:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.142 19:22:07 -- host/discovery.sh@45 -- # hostpid=83903 00:21:30.142 19:22:07 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:30.142 19:22:07 -- host/discovery.sh@46 -- # waitforlisten 83903 /tmp/host.sock 00:21:30.142 19:22:07 -- common/autotest_common.sh@817 -- # '[' -z 83903 ']' 00:21:30.142 19:22:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:21:30.142 19:22:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:30.142 19:22:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:30.142 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:30.142 19:22:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:30.142 19:22:07 -- common/autotest_common.sh@10 -- # set +x 00:21:30.142 [2024-02-14 19:22:07.442138] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:21:30.142 [2024-02-14 19:22:07.442218] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83903 ] 00:21:30.401 [2024-02-14 19:22:07.566271] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.401 [2024-02-14 19:22:07.652244] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:30.401 [2024-02-14 19:22:07.652425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.337 19:22:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:31.337 19:22:08 -- common/autotest_common.sh@850 -- # return 0 00:21:31.337 19:22:08 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:31.337 19:22:08 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:31.337 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.337 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.337 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.337 19:22:08 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:31.337 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.337 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.337 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.337 19:22:08 -- host/discovery.sh@72 -- # notify_id=0 00:21:31.337 19:22:08 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:31.338 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:31.338 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # xargs 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # sort 00:21:31.338 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.338 19:22:08 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:31.338 19:22:08 -- host/discovery.sh@79 -- # get_bdev_list 00:21:31.338 
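discovery.sh has now started a second, host-side SPDK target on /tmp/host.sock and pointed it at the discovery service with bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test; the state checks that follow all poll that socket through two small helpers. Reconstructed from the rpc_cmd/jq/sort/xargs invocations visible in the trace (a sketch, not copied verbatim from the script):

    get_subsystem_names() {   # names of controllers the host has attached
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # bdevs created for namespaces of discovered subsystems
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

rpc_cmd here is the autotest wrapper around scripts/rpc.py, as used throughout this log.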
19:22:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # sort 00:21:31.338 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.338 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # xargs 00:21:31.338 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.338 19:22:08 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:31.338 19:22:08 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:31.338 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.338 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.338 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.338 19:22:08 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:31.338 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:31.338 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # xargs 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # sort 00:21:31.338 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.338 19:22:08 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:31.338 19:22:08 -- host/discovery.sh@83 -- # get_bdev_list 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:31.338 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.338 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # sort 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # xargs 00:21:31.338 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.338 19:22:08 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:31.338 19:22:08 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:31.338 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.338 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.338 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.338 19:22:08 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:31.338 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # sort 00:21:31.338 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.338 19:22:08 -- host/discovery.sh@59 -- # xargs 00:21:31.338 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.338 19:22:08 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:31.338 19:22:08 -- host/discovery.sh@87 -- # get_bdev_list 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:31.338 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.338 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # sort 00:21:31.338 19:22:08 -- host/discovery.sh@55 -- # 
xargs 00:21:31.338 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.597 19:22:08 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:31.597 19:22:08 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:31.597 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.597 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.597 [2024-02-14 19:22:08.781592] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.597 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.597 19:22:08 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:31.597 19:22:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:31.597 19:22:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:31.597 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.597 19:22:08 -- host/discovery.sh@59 -- # sort 00:21:31.597 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.597 19:22:08 -- host/discovery.sh@59 -- # xargs 00:21:31.597 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.597 19:22:08 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:31.597 19:22:08 -- host/discovery.sh@93 -- # get_bdev_list 00:21:31.597 19:22:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:31.597 19:22:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:31.597 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.597 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.597 19:22:08 -- host/discovery.sh@55 -- # sort 00:21:31.597 19:22:08 -- host/discovery.sh@55 -- # xargs 00:21:31.597 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.597 19:22:08 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:31.597 19:22:08 -- host/discovery.sh@94 -- # get_notification_count 00:21:31.597 19:22:08 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:31.597 19:22:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:31.597 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.597 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.597 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.597 19:22:08 -- host/discovery.sh@74 -- # notification_count=0 00:21:31.597 19:22:08 -- host/discovery.sh@75 -- # notify_id=0 00:21:31.597 19:22:08 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:31.597 19:22:08 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:31.597 19:22:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.597 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:31.597 19:22:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.597 19:22:08 -- host/discovery.sh@100 -- # sleep 1 00:21:32.164 [2024-02-14 19:22:09.430046] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:32.164 [2024-02-14 19:22:09.430075] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:32.164 [2024-02-14 19:22:09.430095] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:32.165 [2024-02-14 19:22:09.516173] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:32.165 [2024-02-14 19:22:09.572125] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:32.165 [2024-02-14 19:22:09.572154] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:32.732 19:22:09 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:32.732 19:22:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:32.732 19:22:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.732 19:22:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:32.732 19:22:09 -- common/autotest_common.sh@10 -- # set +x 00:21:32.732 19:22:09 -- host/discovery.sh@59 -- # sort 00:21:32.732 19:22:09 -- host/discovery.sh@59 -- # xargs 00:21:32.732 19:22:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.732 19:22:10 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.732 19:22:10 -- host/discovery.sh@102 -- # get_bdev_list 00:21:32.732 19:22:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.732 19:22:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:32.732 19:22:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.732 19:22:10 -- common/autotest_common.sh@10 -- # set +x 00:21:32.732 19:22:10 -- host/discovery.sh@55 -- # sort 00:21:32.732 19:22:10 -- host/discovery.sh@55 -- # xargs 00:21:32.732 19:22:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.732 19:22:10 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:32.732 19:22:10 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:32.732 19:22:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:32.732 19:22:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.732 19:22:10 -- common/autotest_common.sh@10 -- # set +x 00:21:32.732 19:22:10 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:21:32.732 19:22:10 -- host/discovery.sh@63 -- # sort -n 00:21:32.732 19:22:10 -- host/discovery.sh@63 -- # xargs 00:21:32.732 19:22:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.732 19:22:10 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:21:32.732 19:22:10 -- host/discovery.sh@104 -- # get_notification_count 00:21:32.732 19:22:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:32.732 19:22:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.732 19:22:10 -- host/discovery.sh@74 -- # jq '. | length' 00:21:32.732 19:22:10 -- common/autotest_common.sh@10 -- # set +x 00:21:32.732 19:22:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.992 19:22:10 -- host/discovery.sh@74 -- # notification_count=1 00:21:32.992 19:22:10 -- host/discovery.sh@75 -- # notify_id=1 00:21:32.992 19:22:10 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:32.992 19:22:10 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:32.992 19:22:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.992 19:22:10 -- common/autotest_common.sh@10 -- # set +x 00:21:32.992 19:22:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.992 19:22:10 -- host/discovery.sh@109 -- # sleep 1 00:21:33.966 19:22:11 -- host/discovery.sh@110 -- # get_bdev_list 00:21:33.966 19:22:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.966 19:22:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:33.966 19:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.966 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:21:33.966 19:22:11 -- host/discovery.sh@55 -- # sort 00:21:33.966 19:22:11 -- host/discovery.sh@55 -- # xargs 00:21:33.966 19:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.966 19:22:11 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:33.966 19:22:11 -- host/discovery.sh@111 -- # get_notification_count 00:21:33.966 19:22:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:33.966 19:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.966 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:21:33.966 19:22:11 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:33.966 19:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.966 19:22:11 -- host/discovery.sh@74 -- # notification_count=1 00:21:33.966 19:22:11 -- host/discovery.sh@75 -- # notify_id=2 00:21:33.966 19:22:11 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:33.966 19:22:11 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:33.966 19:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.966 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:21:33.966 [2024-02-14 19:22:11.290478] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:33.966 [2024-02-14 19:22:11.291529] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:33.966 [2024-02-14 19:22:11.291572] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:33.966 19:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.966 19:22:11 -- host/discovery.sh@117 -- # sleep 1 00:21:33.966 [2024-02-14 19:22:11.377624] bdev_nvme.c:6628:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:34.225 [2024-02-14 19:22:11.434840] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:34.225 [2024-02-14 19:22:11.434866] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:34.225 [2024-02-14 19:22:11.434873] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:35.162 19:22:12 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:35.162 19:22:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.162 19:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.162 19:22:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.162 19:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:35.162 19:22:12 -- host/discovery.sh@59 -- # sort 00:21:35.162 19:22:12 -- host/discovery.sh@59 -- # xargs 00:21:35.162 19:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.162 19:22:12 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.162 19:22:12 -- host/discovery.sh@119 -- # get_bdev_list 00:21:35.162 19:22:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.162 19:22:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.162 19:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.162 19:22:12 -- host/discovery.sh@55 -- # sort 00:21:35.162 19:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:35.162 19:22:12 -- host/discovery.sh@55 -- # xargs 00:21:35.162 19:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.162 19:22:12 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:35.162 19:22:12 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:35.162 19:22:12 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:35.162 19:22:12 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:35.162 19:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.162 19:22:12 -- host/discovery.sh@63 -- # xargs 00:21:35.162 19:22:12 -- host/discovery.sh@63 -- # sort 
-n 00:21:35.162 19:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:35.162 19:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.162 19:22:12 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:35.162 19:22:12 -- host/discovery.sh@121 -- # get_notification_count 00:21:35.162 19:22:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:35.162 19:22:12 -- host/discovery.sh@74 -- # jq '. | length' 00:21:35.162 19:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.162 19:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:35.162 19:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.162 19:22:12 -- host/discovery.sh@74 -- # notification_count=0 00:21:35.162 19:22:12 -- host/discovery.sh@75 -- # notify_id=2 00:21:35.162 19:22:12 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:35.162 19:22:12 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:35.162 19:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.162 19:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:35.162 [2024-02-14 19:22:12.519664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.162 [2024-02-14 19:22:12.519703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.162 [2024-02-14 19:22:12.519719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.162 [2024-02-14 19:22:12.519728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.162 [2024-02-14 19:22:12.519739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.162 [2024-02-14 19:22:12.519749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.162 [2024-02-14 19:22:12.519759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.162 [2024-02-14 19:22:12.519768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.162 [2024-02-14 19:22:12.519777] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62ae40 is same with the state(5) to be set 00:21:35.162 [2024-02-14 19:22:12.519837] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:35.162 [2024-02-14 19:22:12.519856] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:35.162 19:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.162 19:22:12 -- host/discovery.sh@127 -- # sleep 1 00:21:35.162 [2024-02-14 19:22:12.529623] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62ae40 (9): Bad file descriptor 00:21:35.162 [2024-02-14 19:22:12.539639] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.162 [2024-02-14 19:22:12.539735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:35.162 [2024-02-14 19:22:12.539785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.162 [2024-02-14 19:22:12.539803] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62ae40 with addr=10.0.0.2, port=4420 00:21:35.162 [2024-02-14 19:22:12.539814] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62ae40 is same with the state(5) to be set 00:21:35.162 [2024-02-14 19:22:12.539830] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62ae40 (9): Bad file descriptor 00:21:35.162 [2024-02-14 19:22:12.539856] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.162 [2024-02-14 19:22:12.539868] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.162 [2024-02-14 19:22:12.539878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.162 [2024-02-14 19:22:12.539894] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.162 [2024-02-14 19:22:12.549695] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.162 [2024-02-14 19:22:12.549777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.162 [2024-02-14 19:22:12.549827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.162 [2024-02-14 19:22:12.549846] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62ae40 with addr=10.0.0.2, port=4420 00:21:35.162 [2024-02-14 19:22:12.549858] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62ae40 is same with the state(5) to be set 00:21:35.162 [2024-02-14 19:22:12.549875] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62ae40 (9): Bad file descriptor 00:21:35.162 [2024-02-14 19:22:12.549916] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.162 [2024-02-14 19:22:12.549928] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.162 [2024-02-14 19:22:12.549937] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.162 [2024-02-14 19:22:12.549951] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
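The repeated blocks here are the host's reconnect poller retrying the path whose listener was just removed: each attempt to 10.0.0.2 port 4420 fails with errno 111 and the reset is abandoned. errno 111 on Linux is ECONNREFUSED, e.g.:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # ECONNREFUSED - Connection refused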
00:21:35.162 [2024-02-14 19:22:12.559745] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.162 [2024-02-14 19:22:12.559824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.162 [2024-02-14 19:22:12.559872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.162 [2024-02-14 19:22:12.559890] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62ae40 with addr=10.0.0.2, port=4420 00:21:35.162 [2024-02-14 19:22:12.559901] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62ae40 is same with the state(5) to be set 00:21:35.162 [2024-02-14 19:22:12.559917] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62ae40 (9): Bad file descriptor 00:21:35.162 [2024-02-14 19:22:12.559942] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.162 [2024-02-14 19:22:12.559953] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.162 [2024-02-14 19:22:12.559961] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.162 [2024-02-14 19:22:12.559976] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.162 [2024-02-14 19:22:12.569796] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.162 [2024-02-14 19:22:12.569882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.162 [2024-02-14 19:22:12.569945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.162 [2024-02-14 19:22:12.569963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62ae40 with addr=10.0.0.2, port=4420 00:21:35.162 [2024-02-14 19:22:12.569974] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62ae40 is same with the state(5) to be set 00:21:35.162 [2024-02-14 19:22:12.569990] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62ae40 (9): Bad file descriptor 00:21:35.162 [2024-02-14 19:22:12.570016] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.162 [2024-02-14 19:22:12.570027] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.162 [2024-02-14 19:22:12.570036] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.162 [2024-02-14 19:22:12.570050] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.421 [2024-02-14 19:22:12.579849] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.422 [2024-02-14 19:22:12.579928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.422 [2024-02-14 19:22:12.579975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.422 [2024-02-14 19:22:12.579994] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62ae40 with addr=10.0.0.2, port=4420 00:21:35.422 [2024-02-14 19:22:12.580005] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62ae40 is same with the state(5) to be set 00:21:35.422 [2024-02-14 19:22:12.580021] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62ae40 (9): Bad file descriptor 00:21:35.422 [2024-02-14 19:22:12.580046] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.422 [2024-02-14 19:22:12.580057] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.422 [2024-02-14 19:22:12.580065] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.422 [2024-02-14 19:22:12.580079] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.422 [2024-02-14 19:22:12.589913] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.422 [2024-02-14 19:22:12.589991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.422 [2024-02-14 19:22:12.590038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.422 [2024-02-14 19:22:12.590055] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62ae40 with addr=10.0.0.2, port=4420 00:21:35.422 [2024-02-14 19:22:12.590067] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62ae40 is same with the state(5) to be set 00:21:35.422 [2024-02-14 19:22:12.590083] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62ae40 (9): Bad file descriptor 00:21:35.422 [2024-02-14 19:22:12.590108] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.422 [2024-02-14 19:22:12.590119] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.422 [2024-02-14 19:22:12.590128] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.422 [2024-02-14 19:22:12.590143] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
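The loop ends once the discovery poller digests the updated discovery log page: just below, the nqn.2016-06.io.spdk:cnode0 path on 10.0.0.2:4420 is reported "not found" and removed while the 10.0.0.2:4421 path is "found again", and host/discovery.sh then verifies through get_subsystem_paths that the only remaining path for nvme0 uses trsvcid 4421. The same check can be made by hand against the host application's RPC socket; a sketch, assuming scripts/rpc.py from an SPDK checkout is on PATH:
  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid'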
00:21:35.422 [2024-02-14 19:22:12.599961] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.422 [2024-02-14 19:22:12.600038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.422 [2024-02-14 19:22:12.600086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.422 [2024-02-14 19:22:12.600103] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62ae40 with addr=10.0.0.2, port=4420 00:21:35.422 [2024-02-14 19:22:12.600115] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62ae40 is same with the state(5) to be set 00:21:35.422 [2024-02-14 19:22:12.600131] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62ae40 (9): Bad file descriptor 00:21:35.422 [2024-02-14 19:22:12.600155] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.422 [2024-02-14 19:22:12.600167] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:35.422 [2024-02-14 19:22:12.600175] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.422 [2024-02-14 19:22:12.600190] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.422 [2024-02-14 19:22:12.606155] bdev_nvme.c:6491:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:35.422 [2024-02-14 19:22:12.606184] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:36.359 19:22:13 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:36.359 19:22:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:36.359 19:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.359 19:22:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:36.359 19:22:13 -- common/autotest_common.sh@10 -- # set +x 00:21:36.359 19:22:13 -- host/discovery.sh@59 -- # sort 00:21:36.359 19:22:13 -- host/discovery.sh@59 -- # xargs 00:21:36.359 19:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.359 19:22:13 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.359 19:22:13 -- host/discovery.sh@129 -- # get_bdev_list 00:21:36.359 19:22:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:36.359 19:22:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:36.359 19:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.359 19:22:13 -- host/discovery.sh@55 -- # sort 00:21:36.359 19:22:13 -- common/autotest_common.sh@10 -- # set +x 00:21:36.359 19:22:13 -- host/discovery.sh@55 -- # xargs 00:21:36.359 19:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.359 19:22:13 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:36.359 19:22:13 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:36.359 19:22:13 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:36.359 19:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.359 19:22:13 -- common/autotest_common.sh@10 -- # set +x 00:21:36.359 19:22:13 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:36.359 19:22:13 -- 
host/discovery.sh@63 -- # sort -n 00:21:36.359 19:22:13 -- host/discovery.sh@63 -- # xargs 00:21:36.359 19:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.359 19:22:13 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:36.359 19:22:13 -- host/discovery.sh@131 -- # get_notification_count 00:21:36.359 19:22:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:36.359 19:22:13 -- host/discovery.sh@74 -- # jq '. | length' 00:21:36.359 19:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.359 19:22:13 -- common/autotest_common.sh@10 -- # set +x 00:21:36.359 19:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.359 19:22:13 -- host/discovery.sh@74 -- # notification_count=0 00:21:36.359 19:22:13 -- host/discovery.sh@75 -- # notify_id=2 00:21:36.359 19:22:13 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:36.359 19:22:13 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:36.359 19:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.359 19:22:13 -- common/autotest_common.sh@10 -- # set +x 00:21:36.359 19:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.359 19:22:13 -- host/discovery.sh@135 -- # sleep 1 00:21:37.735 19:22:14 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:37.736 19:22:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:37.736 19:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.736 19:22:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:37.736 19:22:14 -- common/autotest_common.sh@10 -- # set +x 00:21:37.736 19:22:14 -- host/discovery.sh@59 -- # sort 00:21:37.736 19:22:14 -- host/discovery.sh@59 -- # xargs 00:21:37.736 19:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.736 19:22:14 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:37.736 19:22:14 -- host/discovery.sh@137 -- # get_bdev_list 00:21:37.736 19:22:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.736 19:22:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.736 19:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.736 19:22:14 -- host/discovery.sh@55 -- # xargs 00:21:37.736 19:22:14 -- common/autotest_common.sh@10 -- # set +x 00:21:37.736 19:22:14 -- host/discovery.sh@55 -- # sort 00:21:37.736 19:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.736 19:22:14 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:37.736 19:22:14 -- host/discovery.sh@138 -- # get_notification_count 00:21:37.736 19:22:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:37.736 19:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.736 19:22:14 -- common/autotest_common.sh@10 -- # set +x 00:21:37.736 19:22:14 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:37.736 19:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.736 19:22:14 -- host/discovery.sh@74 -- # notification_count=2 00:21:37.736 19:22:14 -- host/discovery.sh@75 -- # notify_id=4 00:21:37.736 19:22:14 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:37.736 19:22:14 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:37.736 19:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.736 19:22:14 -- common/autotest_common.sh@10 -- # set +x 00:21:38.669 [2024-02-14 19:22:15.930837] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:38.669 [2024-02-14 19:22:15.931039] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:38.669 [2024-02-14 19:22:15.931071] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:38.669 [2024-02-14 19:22:16.016946] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:38.669 [2024-02-14 19:22:16.075595] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:38.669 [2024-02-14 19:22:16.075629] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:38.669 19:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.669 19:22:16 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.669 19:22:16 -- common/autotest_common.sh@638 -- # local es=0 00:21:38.669 19:22:16 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.669 19:22:16 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:38.669 19:22:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:38.669 19:22:16 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:38.669 19:22:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:38.669 19:22:16 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.669 19:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.669 19:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:38.928 2024/02/14 19:22:16 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:38.928 request: 00:21:38.928 { 00:21:38.928 "method": "bdev_nvme_start_discovery", 00:21:38.928 "params": { 00:21:38.928 "name": "nvme", 00:21:38.928 "trtype": "tcp", 00:21:38.928 "traddr": "10.0.0.2", 00:21:38.928 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:38.928 "adrfam": "ipv4", 00:21:38.928 "trsvcid": "8009", 00:21:38.928 "wait_for_attach": true 00:21:38.928 } 00:21:38.928 } 00:21:38.928 Got JSON-RPC error response 00:21:38.928 GoRPCClient: error on JSON-RPC call 00:21:38.928 19:22:16 -- 
common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:38.928 19:22:16 -- common/autotest_common.sh@641 -- # es=1 00:21:38.928 19:22:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:38.928 19:22:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:38.928 19:22:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:38.928 19:22:16 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:38.928 19:22:16 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:38.928 19:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.928 19:22:16 -- host/discovery.sh@67 -- # sort 00:21:38.928 19:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:38.928 19:22:16 -- host/discovery.sh@67 -- # xargs 00:21:38.928 19:22:16 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:38.928 19:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.928 19:22:16 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:38.928 19:22:16 -- host/discovery.sh@147 -- # get_bdev_list 00:21:38.928 19:22:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.928 19:22:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.928 19:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.928 19:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:38.928 19:22:16 -- host/discovery.sh@55 -- # sort 00:21:38.928 19:22:16 -- host/discovery.sh@55 -- # xargs 00:21:38.928 19:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.928 19:22:16 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:38.928 19:22:16 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.928 19:22:16 -- common/autotest_common.sh@638 -- # local es=0 00:21:38.928 19:22:16 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.928 19:22:16 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:38.928 19:22:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:38.928 19:22:16 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:38.928 19:22:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:38.928 19:22:16 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.928 19:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.928 19:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:38.928 2024/02/14 19:22:16 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:38.928 request: 00:21:38.928 { 00:21:38.928 "method": "bdev_nvme_start_discovery", 00:21:38.928 "params": { 00:21:38.928 "name": "nvme_second", 00:21:38.928 "trtype": "tcp", 00:21:38.928 "traddr": "10.0.0.2", 00:21:38.928 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:38.928 "adrfam": "ipv4", 00:21:38.928 "trsvcid": "8009", 00:21:38.928 "wait_for_attach": true 00:21:38.928 } 00:21:38.928 } 00:21:38.928 Got JSON-RPC error response 00:21:38.928 
GoRPCClient: error on JSON-RPC call 00:21:38.928 19:22:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:38.928 19:22:16 -- common/autotest_common.sh@641 -- # es=1 00:21:38.928 19:22:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:38.928 19:22:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:38.928 19:22:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:38.928 19:22:16 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:38.928 19:22:16 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:38.929 19:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.929 19:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:38.929 19:22:16 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:38.929 19:22:16 -- host/discovery.sh@67 -- # sort 00:21:38.929 19:22:16 -- host/discovery.sh@67 -- # xargs 00:21:38.929 19:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.929 19:22:16 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:38.929 19:22:16 -- host/discovery.sh@153 -- # get_bdev_list 00:21:38.929 19:22:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.929 19:22:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.929 19:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.929 19:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:38.929 19:22:16 -- host/discovery.sh@55 -- # xargs 00:21:38.929 19:22:16 -- host/discovery.sh@55 -- # sort 00:21:38.929 19:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.929 19:22:16 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:38.929 19:22:16 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:38.929 19:22:16 -- common/autotest_common.sh@638 -- # local es=0 00:21:38.929 19:22:16 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:38.929 19:22:16 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:38.929 19:22:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:38.929 19:22:16 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:38.929 19:22:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:38.929 19:22:16 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:38.929 19:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.929 19:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:40.306 [2024-02-14 19:22:17.350042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.306 [2024-02-14 19:22:17.350118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.306 [2024-02-14 19:22:17.350138] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x698940 with addr=10.0.0.2, port=8010 00:21:40.306 [2024-02-14 19:22:17.350153] nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:40.306 [2024-02-14 19:22:17.350163] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:40.306 [2024-02-14 19:22:17.350172] bdev_nvme.c:6766:discovery_poller: 
*ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:41.242 [2024-02-14 19:22:18.350029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.242 [2024-02-14 19:22:18.350098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.242 [2024-02-14 19:22:18.350117] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x698940 with addr=10.0.0.2, port=8010 00:21:41.242 [2024-02-14 19:22:18.350133] nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:41.242 [2024-02-14 19:22:18.350142] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:41.242 [2024-02-14 19:22:18.350151] bdev_nvme.c:6766:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:42.179 [2024-02-14 19:22:19.349958] bdev_nvme.c:6747:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:42.179 request: 00:21:42.179 { 00:21:42.179 "method": "bdev_nvme_start_discovery", 00:21:42.179 "params": { 00:21:42.179 "name": "nvme_second", 00:21:42.179 "trtype": "tcp", 00:21:42.179 "traddr": "10.0.0.2", 00:21:42.179 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:42.179 "adrfam": "ipv4", 00:21:42.179 "trsvcid": "8010", 00:21:42.179 "attach_timeout_ms": 3000 00:21:42.179 } 00:21:42.179 } 00:21:42.179 Got JSON-RPC error response 00:21:42.179 GoRPCClient: error on JSON-RPC call 00:21:42.179 2024/02/14 19:22:19 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:42.179 19:22:19 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:42.179 19:22:19 -- common/autotest_common.sh@641 -- # es=1 00:21:42.179 19:22:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:42.179 19:22:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:42.179 19:22:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:42.179 19:22:19 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:42.179 19:22:19 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:42.179 19:22:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.179 19:22:19 -- common/autotest_common.sh@10 -- # set +x 00:21:42.179 19:22:19 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:42.179 19:22:19 -- host/discovery.sh@67 -- # xargs 00:21:42.179 19:22:19 -- host/discovery.sh@67 -- # sort 00:21:42.179 19:22:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.179 19:22:19 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:42.179 19:22:19 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:42.179 19:22:19 -- host/discovery.sh@162 -- # kill 83903 00:21:42.179 19:22:19 -- host/discovery.sh@163 -- # nvmftestfini 00:21:42.179 19:22:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:42.179 19:22:19 -- nvmf/common.sh@116 -- # sync 00:21:42.179 19:22:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:42.180 19:22:19 -- nvmf/common.sh@119 -- # set +e 00:21:42.180 19:22:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:42.180 19:22:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:42.180 rmmod nvme_tcp 00:21:42.180 rmmod nvme_fabrics 00:21:42.180 rmmod nvme_keyring 00:21:42.180 19:22:19 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:21:42.180 19:22:19 -- nvmf/common.sh@123 -- # set -e 00:21:42.180 19:22:19 -- nvmf/common.sh@124 -- # return 0 00:21:42.180 19:22:19 -- nvmf/common.sh@477 -- # '[' -n 83853 ']' 00:21:42.180 19:22:19 -- nvmf/common.sh@478 -- # killprocess 83853 00:21:42.180 19:22:19 -- common/autotest_common.sh@924 -- # '[' -z 83853 ']' 00:21:42.180 19:22:19 -- common/autotest_common.sh@928 -- # kill -0 83853 00:21:42.180 19:22:19 -- common/autotest_common.sh@929 -- # uname 00:21:42.180 19:22:19 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:42.180 19:22:19 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 83853 00:21:42.180 killing process with pid 83853 00:21:42.180 19:22:19 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:21:42.180 19:22:19 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:21:42.180 19:22:19 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 83853' 00:21:42.180 19:22:19 -- common/autotest_common.sh@943 -- # kill 83853 00:21:42.180 19:22:19 -- common/autotest_common.sh@948 -- # wait 83853 00:21:42.439 19:22:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:42.439 19:22:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:42.439 19:22:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:42.439 19:22:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.439 19:22:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:42.439 19:22:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.439 19:22:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.439 19:22:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.439 19:22:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:42.439 00:21:42.439 real 0m13.908s 00:21:42.439 user 0m27.324s 00:21:42.439 sys 0m1.703s 00:21:42.439 19:22:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:42.439 ************************************ 00:21:42.439 END TEST nvmf_discovery 00:21:42.439 ************************************ 00:21:42.439 19:22:19 -- common/autotest_common.sh@10 -- # set +x 00:21:42.698 19:22:19 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:42.698 19:22:19 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:21:42.698 19:22:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:42.698 19:22:19 -- common/autotest_common.sh@10 -- # set +x 00:21:42.698 ************************************ 00:21:42.698 START TEST nvmf_discovery_remove_ifc 00:21:42.698 ************************************ 00:21:42.698 19:22:19 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:42.698 * Looking for test storage... 
00:21:42.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:42.698 19:22:19 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:42.698 19:22:19 -- nvmf/common.sh@7 -- # uname -s 00:21:42.698 19:22:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.698 19:22:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.698 19:22:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.698 19:22:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.698 19:22:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.698 19:22:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.698 19:22:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.698 19:22:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.698 19:22:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.698 19:22:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.698 19:22:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:21:42.698 19:22:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:21:42.698 19:22:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.698 19:22:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.698 19:22:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:42.698 19:22:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:42.698 19:22:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.698 19:22:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.698 19:22:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.699 19:22:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.699 19:22:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.699 19:22:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.699 19:22:19 -- 
paths/export.sh@5 -- # export PATH 00:21:42.699 19:22:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.699 19:22:19 -- nvmf/common.sh@46 -- # : 0 00:21:42.699 19:22:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:42.699 19:22:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:42.699 19:22:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:42.699 19:22:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.699 19:22:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.699 19:22:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:42.699 19:22:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:42.699 19:22:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:42.699 19:22:19 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:42.699 19:22:19 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:42.699 19:22:19 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:42.699 19:22:19 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:42.699 19:22:19 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:42.699 19:22:19 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:42.699 19:22:19 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:42.699 19:22:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:42.699 19:22:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.699 19:22:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:42.699 19:22:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:42.699 19:22:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:42.699 19:22:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.699 19:22:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.699 19:22:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.699 19:22:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:42.699 19:22:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:42.699 19:22:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:42.699 19:22:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:42.699 19:22:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:42.699 19:22:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:42.699 19:22:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.699 19:22:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.699 19:22:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:42.699 19:22:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:42.699 19:22:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:42.699 19:22:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:42.699 19:22:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:42.699 19:22:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:21:42.699 19:22:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:42.699 19:22:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:42.699 19:22:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:42.699 19:22:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:42.699 19:22:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:42.699 19:22:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:42.699 Cannot find device "nvmf_tgt_br" 00:21:42.699 19:22:20 -- nvmf/common.sh@154 -- # true 00:21:42.699 19:22:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:42.699 Cannot find device "nvmf_tgt_br2" 00:21:42.699 19:22:20 -- nvmf/common.sh@155 -- # true 00:21:42.699 19:22:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:42.699 19:22:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:42.699 Cannot find device "nvmf_tgt_br" 00:21:42.699 19:22:20 -- nvmf/common.sh@157 -- # true 00:21:42.699 19:22:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:42.699 Cannot find device "nvmf_tgt_br2" 00:21:42.699 19:22:20 -- nvmf/common.sh@158 -- # true 00:21:42.699 19:22:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:42.699 19:22:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:42.699 19:22:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:42.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.958 19:22:20 -- nvmf/common.sh@161 -- # true 00:21:42.958 19:22:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:42.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.958 19:22:20 -- nvmf/common.sh@162 -- # true 00:21:42.958 19:22:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:42.958 19:22:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:42.958 19:22:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:42.958 19:22:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:42.958 19:22:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:42.958 19:22:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:42.958 19:22:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:42.958 19:22:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:42.958 19:22:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:42.958 19:22:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:42.958 19:22:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:42.958 19:22:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:42.958 19:22:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:42.958 19:22:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:42.958 19:22:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:42.958 19:22:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:42.958 19:22:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:42.958 19:22:20 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:21:42.958 19:22:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:42.958 19:22:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:42.958 19:22:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:42.958 19:22:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:42.958 19:22:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:42.958 19:22:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:42.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:21:42.958 00:21:42.958 --- 10.0.0.2 ping statistics --- 00:21:42.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.958 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:21:42.958 19:22:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:42.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:42.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:21:42.958 00:21:42.958 --- 10.0.0.3 ping statistics --- 00:21:42.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.958 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:21:42.958 19:22:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:42.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:42.958 00:21:42.958 --- 10.0.0.1 ping statistics --- 00:21:42.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.959 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:42.959 19:22:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.959 19:22:20 -- nvmf/common.sh@421 -- # return 0 00:21:42.959 19:22:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:42.959 19:22:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.959 19:22:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:42.959 19:22:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:42.959 19:22:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.959 19:22:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:42.959 19:22:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:42.959 19:22:20 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:42.959 19:22:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:42.959 19:22:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:42.959 19:22:20 -- common/autotest_common.sh@10 -- # set +x 00:21:42.959 19:22:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:42.959 19:22:20 -- nvmf/common.sh@469 -- # nvmfpid=84412 00:21:42.959 19:22:20 -- nvmf/common.sh@470 -- # waitforlisten 84412 00:21:42.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.959 19:22:20 -- common/autotest_common.sh@817 -- # '[' -z 84412 ']' 00:21:42.959 19:22:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.959 19:22:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:42.959 19:22:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
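The nvmf_veth_init trace above builds the topology both discovery tests rely on: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), the initiator-side nvmf_init_if (10.0.0.1/24) left in the root namespace, their peers joined through the nvmf_br bridge, an iptables ACCEPT rule for TCP port 4420, and a ping of each address to confirm connectivity before the target is started. A stripped-down sketch of the same setup, restating the commands traced above with the second target interface omitted:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # root namespace -> target namespace, as checked above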
00:21:42.959 19:22:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:42.959 19:22:20 -- common/autotest_common.sh@10 -- # set +x 00:21:43.218 [2024-02-14 19:22:20.418637] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:21:43.218 [2024-02-14 19:22:20.418899] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.218 [2024-02-14 19:22:20.559678] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.477 [2024-02-14 19:22:20.666547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:43.477 [2024-02-14 19:22:20.666718] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.477 [2024-02-14 19:22:20.666737] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.477 [2024-02-14 19:22:20.666749] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.477 [2024-02-14 19:22:20.666789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.044 19:22:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:44.044 19:22:21 -- common/autotest_common.sh@850 -- # return 0 00:21:44.044 19:22:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:44.044 19:22:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:44.044 19:22:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.044 19:22:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.044 19:22:21 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:44.044 19:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.044 19:22:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.044 [2024-02-14 19:22:21.436717] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.044 [2024-02-14 19:22:21.444823] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:44.044 null0 00:21:44.303 [2024-02-14 19:22:21.477128] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.303 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:44.303 19:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.303 19:22:21 -- host/discovery_remove_ifc.sh@59 -- # hostpid=84462 00:21:44.303 19:22:21 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:44.303 19:22:21 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 84462 /tmp/host.sock 00:21:44.303 19:22:21 -- common/autotest_common.sh@817 -- # '[' -z 84462 ']' 00:21:44.303 19:22:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:21:44.303 19:22:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:44.303 19:22:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:44.303 19:22:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:44.303 19:22:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.304 [2024-02-14 19:22:21.558280] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
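Two SPDK processes are involved from here on: the NVMe-oF target, started inside the namespace as nvmf_tgt -i 0 -e 0xFFFF -m 0x2 (nvmfpid 84412, RPC on the default /var/tmp/spdk.sock), and a second nvmf_tgt instance acting as the host/initiator, started by discovery_remove_ifc.sh with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme (hostpid 84462). Every rpc_cmd in the trace therefore selects the process by socket path; a rough sketch, again assuming scripts/rpc.py is on PATH:
  rpc.py nvmf_get_subsystems                  # target side, default /var/tmp/spdk.sock
  rpc.py -s /tmp/host.sock bdev_get_bdevs     # host side, /tmp/host.sock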
00:21:44.304 [2024-02-14 19:22:21.558569] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84462 ] 00:21:44.304 [2024-02-14 19:22:21.697731] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.562 [2024-02-14 19:22:21.804505] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:44.562 [2024-02-14 19:22:21.805054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.130 19:22:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:45.130 19:22:22 -- common/autotest_common.sh@850 -- # return 0 00:21:45.130 19:22:22 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.130 19:22:22 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:45.130 19:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.130 19:22:22 -- common/autotest_common.sh@10 -- # set +x 00:21:45.130 19:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.130 19:22:22 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:45.130 19:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.130 19:22:22 -- common/autotest_common.sh@10 -- # set +x 00:21:45.390 19:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.390 19:22:22 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:45.390 19:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.390 19:22:22 -- common/autotest_common.sh@10 -- # set +x 00:21:46.327 [2024-02-14 19:22:23.589634] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:46.327 [2024-02-14 19:22:23.589673] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:46.327 [2024-02-14 19:22:23.589690] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:46.327 [2024-02-14 19:22:23.675736] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:46.327 [2024-02-14 19:22:23.733445] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:46.327 [2024-02-14 19:22:23.733503] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:46.327 [2024-02-14 19:22:23.733531] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:46.327 [2024-02-14 19:22:23.733545] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:46.327 [2024-02-14 19:22:23.733562] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:46.327 19:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:46.327 19:22:23 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:46.327 19:22:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:46.327 [2024-02-14 
19:22:23.738246] bdev_nvme.c:1581:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf1c750 was disconnected and freed. delete nvme_qpair. 00:21:46.327 19:22:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.327 19:22:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:46.327 19:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:46.327 19:22:23 -- common/autotest_common.sh@10 -- # set +x 00:21:46.327 19:22:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:46.327 19:22:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:46.585 19:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:46.585 19:22:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:46.585 19:22:23 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:21:46.585 19:22:23 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:46.585 19:22:23 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:46.585 19:22:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:46.585 19:22:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.585 19:22:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:46.586 19:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:46.586 19:22:23 -- common/autotest_common.sh@10 -- # set +x 00:21:46.586 19:22:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:46.586 19:22:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:46.586 19:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:46.586 19:22:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:46.586 19:22:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:47.523 19:22:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:47.523 19:22:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.523 19:22:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:47.523 19:22:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:47.523 19:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.523 19:22:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:47.523 19:22:24 -- common/autotest_common.sh@10 -- # set +x 00:21:47.523 19:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.523 19:22:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:47.523 19:22:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:48.899 19:22:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:48.899 19:22:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.899 19:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.899 19:22:25 -- common/autotest_common.sh@10 -- # set +x 00:21:48.899 19:22:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:48.899 19:22:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:48.899 19:22:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:48.899 19:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.899 19:22:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:48.899 19:22:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:49.833 19:22:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:49.833 19:22:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
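At this point the script has attached the subsystem through discovery (controller nvme0, bdev nvme0n1), deleted 10.0.0.2/24 from nvmf_tgt_if inside the namespace and brought the interface down, and is now polling wait_for_bdev '' once per second until the namespace bdev disappears. get_bdev_list is just the RPC plus a little normalization, so the equivalent hand-run loop is roughly (socket path as used by the test, rpc.py assumed on PATH):
  while [ -n "$(rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]; do
      sleep 1
  done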
00:21:49.833 19:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.833 19:22:26 -- common/autotest_common.sh@10 -- # set +x 00:21:49.833 19:22:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:49.833 19:22:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:49.833 19:22:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:49.833 19:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.833 19:22:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:49.833 19:22:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:50.767 19:22:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:50.767 19:22:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.767 19:22:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.767 19:22:28 -- common/autotest_common.sh@10 -- # set +x 00:21:50.767 19:22:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:50.767 19:22:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:50.767 19:22:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:50.767 19:22:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.767 19:22:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:50.767 19:22:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:51.702 19:22:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:51.702 19:22:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.702 19:22:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:51.702 19:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.702 19:22:29 -- common/autotest_common.sh@10 -- # set +x 00:21:51.702 19:22:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:51.702 19:22:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:51.959 19:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.959 [2024-02-14 19:22:29.161528] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:51.959 [2024-02-14 19:22:29.161590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.959 [2024-02-14 19:22:29.161604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.959 [2024-02-14 19:22:29.161614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.959 [2024-02-14 19:22:29.161622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.959 [2024-02-14 19:22:29.161631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.960 [2024-02-14 19:22:29.161639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.960 [2024-02-14 19:22:29.161646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.960 [2024-02-14 19:22:29.161654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.960 [2024-02-14 
19:22:29.161662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.960 [2024-02-14 19:22:29.161670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.960 [2024-02-14 19:22:29.161677] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5eb0 is same with the state(5) to be set 00:21:51.960 19:22:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:51.960 19:22:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:51.960 [2024-02-14 19:22:29.171532] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5eb0 (9): Bad file descriptor 00:21:51.960 [2024-02-14 19:22:29.181541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:52.896 19:22:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:52.896 19:22:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.896 19:22:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:52.896 19:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.896 19:22:30 -- common/autotest_common.sh@10 -- # set +x 00:21:52.896 19:22:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:52.896 19:22:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:52.896 [2024-02-14 19:22:30.210611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:53.832 [2024-02-14 19:22:31.234622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:53.832 [2024-02-14 19:22:31.234988] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5eb0 with addr=10.0.0.2, port=4420 00:21:53.832 [2024-02-14 19:22:31.235273] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5eb0 is same with the state(5) to be set 00:21:53.832 [2024-02-14 19:22:31.236031] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5eb0 (9): Bad file descriptor 00:21:53.832 [2024-02-14 19:22:31.236102] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
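Note the change in errno compared with the earlier discovery.sh run: there the 4420 listener had been removed while the address stayed reachable, so connect() returned 111 (ECONNREFUSED); here the 10.0.0.2 address itself is gone, so the pending spdk_sock_recv() and the reconnect attempts fail with 110 (ETIMEDOUT) and the queued admin commands (ASYNC EVENT REQUEST, KEEP ALIVE) are aborted with "SQ DELETION" status. The symbolic names can be confirmed with a one-liner:
  python3 -c 'import errno; print(errno.errorcode[110], errno.errorcode[111])'   # ETIMEDOUT ECONNREFUSED on Linux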
00:21:53.832 [2024-02-14 19:22:31.236153] bdev_nvme.c:6455:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:53.833 [2024-02-14 19:22:31.236216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.833 [2024-02-14 19:22:31.236244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.833 [2024-02-14 19:22:31.236267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.833 [2024-02-14 19:22:31.236286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.833 [2024-02-14 19:22:31.236310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.833 [2024-02-14 19:22:31.236330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.833 [2024-02-14 19:22:31.236350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.833 [2024-02-14 19:22:31.236369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.833 [2024-02-14 19:22:31.236390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.833 [2024-02-14 19:22:31.236410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.833 [2024-02-14 19:22:31.236430] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
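The repeated xtrace blocks in this part of the log are the test's bdev polling loop: once per second it asks the host-side SPDK app on /tmp/host.sock for its bdev list and compares it against the expected value ('' while waiting for nvme0n1 to disappear, a bdev name while waiting for a fresh attach). A minimal sketch of those helpers, with the command pipeline taken from the trace and the loop body reconstructed (the real definitions live in host/discovery_remove_ifc.sh and may differ):

    get_bdev_list() {
        # rpc_cmd is the suite's RPC wrapper; -s selects the host-side socket
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local bdev_check=$1
        # poll until the reported bdev list matches what the test expects
        while [[ "$(get_bdev_list)" != "$bdev_check" ]]; do
            sleep 1
        done
    }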
00:21:53.833 [2024-02-14 19:22:31.236511] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8d430 (9): Bad file descriptor 00:21:53.833 [2024-02-14 19:22:31.237482] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:53.833 [2024-02-14 19:22:31.237539] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:54.092 19:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:54.092 19:22:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:54.092 19:22:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:55.045 19:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.045 19:22:32 -- common/autotest_common.sh@10 -- # set +x 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:55.045 19:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.045 19:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.045 19:22:32 -- common/autotest_common.sh@10 -- # set +x 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:55.045 19:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:55.045 19:22:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:55.981 [2024-02-14 19:22:33.240988] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:55.981 [2024-02-14 19:22:33.241009] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:55.981 [2024-02-14 19:22:33.241024] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:55.981 [2024-02-14 19:22:33.327069] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:55.981 [2024-02-14 19:22:33.381579] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:55.981 [2024-02-14 19:22:33.381615] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:55.981 [2024-02-14 19:22:33.381634] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:55.981 [2024-02-14 19:22:33.381646] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:21:55.981 [2024-02-14 19:22:33.381653] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:55.981 [2024-02-14 19:22:33.389506] bdev_nvme.c:1581:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xed6780 was disconnected and freed. delete nvme_qpair. 00:21:56.240 19:22:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:56.240 19:22:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.240 19:22:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.240 19:22:33 -- common/autotest_common.sh@10 -- # set +x 00:21:56.240 19:22:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:56.240 19:22:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:56.240 19:22:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:56.240 19:22:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.240 19:22:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:56.240 19:22:33 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:56.240 19:22:33 -- host/discovery_remove_ifc.sh@90 -- # killprocess 84462 00:21:56.240 19:22:33 -- common/autotest_common.sh@924 -- # '[' -z 84462 ']' 00:21:56.240 19:22:33 -- common/autotest_common.sh@928 -- # kill -0 84462 00:21:56.240 19:22:33 -- common/autotest_common.sh@929 -- # uname 00:21:56.240 19:22:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:56.240 19:22:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 84462 00:21:56.240 killing process with pid 84462 00:21:56.240 19:22:33 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:21:56.240 19:22:33 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:56.240 19:22:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 84462' 00:21:56.240 19:22:33 -- common/autotest_common.sh@943 -- # kill 84462 00:21:56.240 19:22:33 -- common/autotest_common.sh@948 -- # wait 84462 00:21:56.499 19:22:33 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:56.499 19:22:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:56.499 19:22:33 -- nvmf/common.sh@116 -- # sync 00:21:56.499 19:22:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:56.499 19:22:33 -- nvmf/common.sh@119 -- # set +e 00:21:56.499 19:22:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:56.499 19:22:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:56.499 rmmod nvme_tcp 00:21:56.499 rmmod nvme_fabrics 00:21:56.499 rmmod nvme_keyring 00:21:56.499 19:22:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:56.499 19:22:33 -- nvmf/common.sh@123 -- # set -e 00:21:56.499 19:22:33 -- nvmf/common.sh@124 -- # return 0 00:21:56.499 19:22:33 -- nvmf/common.sh@477 -- # '[' -n 84412 ']' 00:21:56.499 19:22:33 -- nvmf/common.sh@478 -- # killprocess 84412 00:21:56.499 19:22:33 -- common/autotest_common.sh@924 -- # '[' -z 84412 ']' 00:21:56.499 19:22:33 -- common/autotest_common.sh@928 -- # kill -0 84412 00:21:56.499 19:22:33 -- common/autotest_common.sh@929 -- # uname 00:21:56.499 19:22:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:56.499 19:22:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 84412 00:21:56.499 killing process with pid 84412 00:21:56.499 19:22:33 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:21:56.499 19:22:33 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 
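Teardown in both suites goes through the same killprocess helper; the checks visible in the trace (pid-present guard, kill -0 liveness probe, ps comm lookup, sudo guard, then kill and wait) roughly amount to the sketch below. This is a reconstruction from the traced commands, not the actual common/autotest_common.sh source:

    killprocess() {
        local pid=$1 process_name
        [[ -n "$pid" ]] || return 1                 # the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 0      # already exited, nothing to do
        if [[ "$(uname)" == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ "$process_name" == sudo ]]; then
            return 1    # SPDK apps show up as reactor_N; the sudo branch is not reconstructed here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }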
00:21:56.499 19:22:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 84412' 00:21:56.499 19:22:33 -- common/autotest_common.sh@943 -- # kill 84412 00:21:56.499 19:22:33 -- common/autotest_common.sh@948 -- # wait 84412 00:21:56.802 19:22:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:56.802 19:22:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:56.802 19:22:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:56.802 19:22:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.802 19:22:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:56.802 19:22:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.802 19:22:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.802 19:22:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.802 19:22:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:56.802 00:21:56.802 real 0m14.281s 00:21:56.802 user 0m24.477s 00:21:56.802 sys 0m1.558s 00:21:56.802 19:22:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:56.802 19:22:34 -- common/autotest_common.sh@10 -- # set +x 00:21:56.802 ************************************ 00:21:56.802 END TEST nvmf_discovery_remove_ifc 00:21:56.802 ************************************ 00:21:57.081 19:22:34 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:21:57.082 19:22:34 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:57.082 19:22:34 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:21:57.082 19:22:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:57.082 19:22:34 -- common/autotest_common.sh@10 -- # set +x 00:21:57.082 ************************************ 00:21:57.082 START TEST nvmf_digest 00:21:57.082 ************************************ 00:21:57.082 19:22:34 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:57.082 * Looking for test storage... 
00:21:57.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:57.082 19:22:34 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:57.082 19:22:34 -- nvmf/common.sh@7 -- # uname -s 00:21:57.082 19:22:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.082 19:22:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.082 19:22:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.082 19:22:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.082 19:22:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.082 19:22:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.082 19:22:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.082 19:22:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.082 19:22:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.082 19:22:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.082 19:22:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:21:57.082 19:22:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:21:57.082 19:22:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.082 19:22:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.082 19:22:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:57.082 19:22:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:57.082 19:22:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.082 19:22:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.082 19:22:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.082 19:22:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.082 19:22:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.082 19:22:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.082 19:22:34 -- paths/export.sh@5 
-- # export PATH 00:21:57.082 19:22:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.082 19:22:34 -- nvmf/common.sh@46 -- # : 0 00:21:57.082 19:22:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:57.082 19:22:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:57.082 19:22:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:57.082 19:22:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.082 19:22:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.082 19:22:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:57.082 19:22:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:57.082 19:22:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:57.082 19:22:34 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:57.082 19:22:34 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:57.082 19:22:34 -- host/digest.sh@16 -- # runtime=2 00:21:57.082 19:22:34 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:21:57.082 19:22:34 -- host/digest.sh@132 -- # nvmftestinit 00:21:57.082 19:22:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:57.082 19:22:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.082 19:22:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:57.082 19:22:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:57.082 19:22:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:57.082 19:22:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.082 19:22:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.082 19:22:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.082 19:22:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:57.082 19:22:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:57.082 19:22:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:57.082 19:22:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:57.082 19:22:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:57.082 19:22:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:57.082 19:22:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.082 19:22:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.082 19:22:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:57.082 19:22:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:57.082 19:22:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:57.082 19:22:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:57.082 19:22:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:57.082 19:22:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.082 19:22:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:57.082 19:22:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:57.082 19:22:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:57.082 19:22:34 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:57.082 19:22:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:57.082 19:22:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:57.082 Cannot find device "nvmf_tgt_br" 00:21:57.082 19:22:34 -- nvmf/common.sh@154 -- # true 00:21:57.082 19:22:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:57.082 Cannot find device "nvmf_tgt_br2" 00:21:57.082 19:22:34 -- nvmf/common.sh@155 -- # true 00:21:57.082 19:22:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:57.082 19:22:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:57.082 Cannot find device "nvmf_tgt_br" 00:21:57.082 19:22:34 -- nvmf/common.sh@157 -- # true 00:21:57.082 19:22:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:57.082 Cannot find device "nvmf_tgt_br2" 00:21:57.082 19:22:34 -- nvmf/common.sh@158 -- # true 00:21:57.082 19:22:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:57.082 19:22:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:57.082 19:22:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:57.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:57.082 19:22:34 -- nvmf/common.sh@161 -- # true 00:21:57.082 19:22:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:57.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:57.082 19:22:34 -- nvmf/common.sh@162 -- # true 00:21:57.082 19:22:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:57.082 19:22:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:57.082 19:22:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:57.082 19:22:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:57.082 19:22:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:57.082 19:22:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:57.342 19:22:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:57.342 19:22:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:57.342 19:22:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:57.342 19:22:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:57.342 19:22:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:57.342 19:22:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:57.342 19:22:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:57.342 19:22:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:57.342 19:22:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:57.342 19:22:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:57.342 19:22:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:57.342 19:22:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:57.342 19:22:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:57.342 19:22:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:57.342 19:22:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:57.342 
19:22:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:57.342 19:22:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:57.342 19:22:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:57.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:21:57.342 00:21:57.342 --- 10.0.0.2 ping statistics --- 00:21:57.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.342 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:57.342 19:22:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:57.342 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:57.342 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:21:57.342 00:21:57.342 --- 10.0.0.3 ping statistics --- 00:21:57.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.342 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:57.342 19:22:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:57.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:57.342 00:21:57.342 --- 10.0.0.1 ping statistics --- 00:21:57.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.342 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:57.342 19:22:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.342 19:22:34 -- nvmf/common.sh@421 -- # return 0 00:21:57.342 19:22:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:57.342 19:22:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.342 19:22:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:57.342 19:22:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:57.342 19:22:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.342 19:22:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:57.342 19:22:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:57.342 19:22:34 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:57.342 19:22:34 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:21:57.342 19:22:34 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:21:57.342 19:22:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:57.342 19:22:34 -- common/autotest_common.sh@10 -- # set +x 00:21:57.342 ************************************ 00:21:57.342 START TEST nvmf_digest_clean 00:21:57.342 ************************************ 00:21:57.342 19:22:34 -- common/autotest_common.sh@1102 -- # run_digest 00:21:57.342 19:22:34 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:21:57.342 19:22:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:57.342 19:22:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:57.342 19:22:34 -- common/autotest_common.sh@10 -- # set +x 00:21:57.342 19:22:34 -- nvmf/common.sh@469 -- # nvmfpid=84868 00:21:57.342 19:22:34 -- nvmf/common.sh@470 -- # waitforlisten 84868 00:21:57.342 19:22:34 -- common/autotest_common.sh@817 -- # '[' -z 84868 ']' 00:21:57.342 19:22:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.342 19:22:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:57.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
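Condensed, the nvmf_veth_init sequence traced above gives the digest tests their network: the initiator keeps 10.0.0.1 in the root namespace, the target interfaces move into the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the bridge-side veth peers are enslaved to one bridge, and iptables admits port 4420. The commands below are copied from the trace (stale-device cleanup and the matching 'ip link set ... up' calls are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target reachability
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability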
00:21:57.342 19:22:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.342 19:22:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:57.342 19:22:34 -- common/autotest_common.sh@10 -- # set +x 00:21:57.342 19:22:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:57.342 [2024-02-14 19:22:34.749932] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:21:57.342 [2024-02-14 19:22:34.750033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.601 [2024-02-14 19:22:34.892274] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.601 [2024-02-14 19:22:35.002362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:57.601 [2024-02-14 19:22:35.002585] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.601 [2024-02-14 19:22:35.002605] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.601 [2024-02-14 19:22:35.002618] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.601 [2024-02-14 19:22:35.002659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.168 19:22:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:58.168 19:22:35 -- common/autotest_common.sh@850 -- # return 0 00:21:58.168 19:22:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:58.168 19:22:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:58.168 19:22:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.427 19:22:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.427 19:22:35 -- host/digest.sh@120 -- # common_target_config 00:21:58.427 19:22:35 -- host/digest.sh@43 -- # rpc_cmd 00:21:58.427 19:22:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.427 19:22:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.427 null0 00:21:58.427 [2024-02-14 19:22:35.749331] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.427 [2024-02-14 19:22:35.773462] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.427 19:22:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.427 19:22:35 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:21:58.427 19:22:35 -- host/digest.sh@77 -- # local rw bs qd 00:21:58.427 19:22:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:58.427 19:22:35 -- host/digest.sh@80 -- # rw=randread 00:21:58.427 19:22:35 -- host/digest.sh@80 -- # bs=4096 00:21:58.427 19:22:35 -- host/digest.sh@80 -- # qd=128 00:21:58.427 19:22:35 -- host/digest.sh@82 -- # bperfpid=84918 00:21:58.427 19:22:35 -- host/digest.sh@83 -- # waitforlisten 84918 /var/tmp/bperf.sock 00:21:58.427 19:22:35 -- common/autotest_common.sh@817 -- # '[' -z 84918 ']' 00:21:58.427 19:22:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:58.427 19:22:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:58.427 19:22:35 -- common/autotest_common.sh@824 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:58.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:58.427 19:22:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:58.427 19:22:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.427 19:22:35 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:58.427 [2024-02-14 19:22:35.836346] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:21:58.427 [2024-02-14 19:22:35.836434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84918 ] 00:21:58.686 [2024-02-14 19:22:35.975748] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.686 [2024-02-14 19:22:36.065957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.623 19:22:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:59.623 19:22:36 -- common/autotest_common.sh@850 -- # return 0 00:21:59.623 19:22:36 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:21:59.623 19:22:36 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:21:59.623 19:22:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:59.882 19:22:37 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:59.882 19:22:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:00.141 nvme0n1 00:22:00.141 19:22:37 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:00.141 19:22:37 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:00.141 Running I/O for 2 seconds... 
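Each run_bperf iteration traced here follows the same shape: start bdevperf paused on its own RPC socket, initialize the framework, attach an NVMe-oF controller with the TCP data digest enabled (--ddgst, which is what produces the crc32c accel work counted later), then drive the workload through bdevperf.py. Sketch assembled from the traced commands; helper names (waitforlisten, bperf_rpc) come from the trace and their real bodies are not reproduced here:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock

    bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    bperf_rpc framework_start_init
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests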
00:22:02.677 00:22:02.677 Latency(us) 00:22:02.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.677 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:02.677 nvme0n1 : 2.00 24067.78 94.01 0.00 0.00 5312.31 2427.81 11617.75 00:22:02.677 =================================================================================================================== 00:22:02.677 Total : 24067.78 94.01 0.00 0.00 5312.31 2427.81 11617.75 00:22:02.677 0 00:22:02.677 19:22:39 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:02.677 19:22:39 -- host/digest.sh@92 -- # get_accel_stats 00:22:02.677 19:22:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:02.677 19:22:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:02.677 | select(.opcode=="crc32c") 00:22:02.677 | "\(.module_name) \(.executed)"' 00:22:02.677 19:22:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:02.677 19:22:39 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:02.677 19:22:39 -- host/digest.sh@93 -- # exp_module=software 00:22:02.677 19:22:39 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:02.677 19:22:39 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:02.677 19:22:39 -- host/digest.sh@97 -- # killprocess 84918 00:22:02.677 19:22:39 -- common/autotest_common.sh@924 -- # '[' -z 84918 ']' 00:22:02.677 19:22:39 -- common/autotest_common.sh@928 -- # kill -0 84918 00:22:02.677 19:22:39 -- common/autotest_common.sh@929 -- # uname 00:22:02.677 19:22:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:02.677 19:22:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 84918 00:22:02.677 19:22:39 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:22:02.677 19:22:39 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:22:02.677 killing process with pid 84918 00:22:02.677 19:22:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 84918' 00:22:02.677 Received shutdown signal, test time was about 2.000000 seconds 00:22:02.677 00:22:02.677 Latency(us) 00:22:02.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.677 =================================================================================================================== 00:22:02.677 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.677 19:22:39 -- common/autotest_common.sh@943 -- # kill 84918 00:22:02.677 19:22:39 -- common/autotest_common.sh@948 -- # wait 84918 00:22:02.677 19:22:39 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:02.677 19:22:39 -- host/digest.sh@77 -- # local rw bs qd 00:22:02.677 19:22:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:02.677 19:22:39 -- host/digest.sh@80 -- # rw=randread 00:22:02.677 19:22:39 -- host/digest.sh@80 -- # bs=131072 00:22:02.677 19:22:39 -- host/digest.sh@80 -- # qd=16 00:22:02.677 19:22:39 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:02.677 19:22:39 -- host/digest.sh@82 -- # bperfpid=85008 00:22:02.677 19:22:39 -- host/digest.sh@83 -- # waitforlisten 85008 /var/tmp/bperf.sock 00:22:02.677 19:22:39 -- common/autotest_common.sh@817 -- # '[' -z 85008 ']' 00:22:02.677 19:22:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:02.677 19:22:39 -- common/autotest_common.sh@822 -- # 
local max_retries=100 00:22:02.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:02.677 19:22:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:02.677 19:22:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:02.677 19:22:39 -- common/autotest_common.sh@10 -- # set +x 00:22:02.677 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:02.677 Zero copy mechanism will not be used. 00:22:02.677 [2024-02-14 19:22:40.036813] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:22:02.677 [2024-02-14 19:22:40.036899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85008 ] 00:22:02.936 [2024-02-14 19:22:40.165137] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.936 [2024-02-14 19:22:40.240277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.873 19:22:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:03.873 19:22:41 -- common/autotest_common.sh@850 -- # return 0 00:22:03.873 19:22:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:03.873 19:22:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:03.873 19:22:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:04.132 19:22:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:04.132 19:22:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:04.390 nvme0n1 00:22:04.390 19:22:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:04.390 19:22:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:04.390 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:04.390 Zero copy mechanism will not be used. 00:22:04.390 Running I/O for 2 seconds... 
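After every bdevperf run above, the suite reads back the accel statistics and checks that the crc32c digest work was actually executed and landed on the expected module (software here, since this job configures no offload engine). Reconstructed from the traced jq filter; get_accel_stats is the bperf_rpc accel_get_stats call shown in the trace:

    read -r acc_module acc_executed < <(
        bperf_rpc accel_get_stats | jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"'
    )
    exp_module=software                  # a hardware accel module would change this
    (( acc_executed > 0 ))               # digests must actually have run
    [[ "$acc_module" == "$exp_module" ]]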
00:22:06.924 00:22:06.924 Latency(us) 00:22:06.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.924 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:06.924 nvme0n1 : 2.04 9025.83 1128.23 0.00 0.00 1736.94 580.89 42657.98 00:22:06.924 =================================================================================================================== 00:22:06.924 Total : 9025.83 1128.23 0.00 0.00 1736.94 580.89 42657.98 00:22:06.924 0 00:22:06.924 19:22:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:06.924 19:22:43 -- host/digest.sh@92 -- # get_accel_stats 00:22:06.924 19:22:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:06.924 19:22:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:06.924 | select(.opcode=="crc32c") 00:22:06.924 | "\(.module_name) \(.executed)"' 00:22:06.924 19:22:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:06.924 19:22:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:06.924 19:22:44 -- host/digest.sh@93 -- # exp_module=software 00:22:06.924 19:22:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:06.924 19:22:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:06.924 19:22:44 -- host/digest.sh@97 -- # killprocess 85008 00:22:06.924 19:22:44 -- common/autotest_common.sh@924 -- # '[' -z 85008 ']' 00:22:06.924 19:22:44 -- common/autotest_common.sh@928 -- # kill -0 85008 00:22:06.924 19:22:44 -- common/autotest_common.sh@929 -- # uname 00:22:06.924 19:22:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:06.924 19:22:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 85008 00:22:06.924 19:22:44 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:22:06.924 19:22:44 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:22:06.924 killing process with pid 85008 00:22:06.924 19:22:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 85008' 00:22:06.924 Received shutdown signal, test time was about 2.000000 seconds 00:22:06.924 00:22:06.924 Latency(us) 00:22:06.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.924 =================================================================================================================== 00:22:06.924 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.925 19:22:44 -- common/autotest_common.sh@943 -- # kill 85008 00:22:06.925 19:22:44 -- common/autotest_common.sh@948 -- # wait 85008 00:22:06.925 19:22:44 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:06.925 19:22:44 -- host/digest.sh@77 -- # local rw bs qd 00:22:06.925 19:22:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:06.925 19:22:44 -- host/digest.sh@80 -- # rw=randwrite 00:22:06.925 19:22:44 -- host/digest.sh@80 -- # bs=4096 00:22:06.925 19:22:44 -- host/digest.sh@80 -- # qd=128 00:22:06.925 19:22:44 -- host/digest.sh@82 -- # bperfpid=85093 00:22:06.925 19:22:44 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:06.925 19:22:44 -- host/digest.sh@83 -- # waitforlisten 85093 /var/tmp/bperf.sock 00:22:06.925 19:22:44 -- common/autotest_common.sh@817 -- # '[' -z 85093 ']' 00:22:06.925 19:22:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:06.925 19:22:44 -- common/autotest_common.sh@822 -- # 
local max_retries=100 00:22:06.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:06.925 19:22:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:06.925 19:22:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:06.925 19:22:44 -- common/autotest_common.sh@10 -- # set +x 00:22:07.184 [2024-02-14 19:22:44.374619] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:22:07.184 [2024-02-14 19:22:44.374705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85093 ] 00:22:07.184 [2024-02-14 19:22:44.503938] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.442 [2024-02-14 19:22:44.608954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.010 19:22:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:08.010 19:22:45 -- common/autotest_common.sh@850 -- # return 0 00:22:08.010 19:22:45 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:08.010 19:22:45 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:08.010 19:22:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:08.268 19:22:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:08.268 19:22:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:08.526 nvme0n1 00:22:08.526 19:22:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:08.526 19:22:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:08.784 Running I/O for 2 seconds... 
00:22:10.685 00:22:10.685 Latency(us) 00:22:10.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.685 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:10.685 nvme0n1 : 2.01 29191.49 114.03 0.00 0.00 4380.67 1854.37 7864.32 00:22:10.685 =================================================================================================================== 00:22:10.685 Total : 29191.49 114.03 0.00 0.00 4380.67 1854.37 7864.32 00:22:10.685 0 00:22:10.685 19:22:47 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:10.685 19:22:47 -- host/digest.sh@92 -- # get_accel_stats 00:22:10.685 19:22:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:10.685 19:22:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:10.685 19:22:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:10.685 | select(.opcode=="crc32c") 00:22:10.685 | "\(.module_name) \(.executed)"' 00:22:10.944 19:22:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:10.944 19:22:48 -- host/digest.sh@93 -- # exp_module=software 00:22:10.944 19:22:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:10.944 19:22:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:10.944 19:22:48 -- host/digest.sh@97 -- # killprocess 85093 00:22:10.944 19:22:48 -- common/autotest_common.sh@924 -- # '[' -z 85093 ']' 00:22:10.944 19:22:48 -- common/autotest_common.sh@928 -- # kill -0 85093 00:22:10.944 19:22:48 -- common/autotest_common.sh@929 -- # uname 00:22:10.944 19:22:48 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:10.944 19:22:48 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 85093 00:22:10.944 19:22:48 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:22:10.944 19:22:48 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:22:10.944 killing process with pid 85093 00:22:10.944 19:22:48 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 85093' 00:22:10.944 Received shutdown signal, test time was about 2.000000 seconds 00:22:10.944 00:22:10.944 Latency(us) 00:22:10.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.944 =================================================================================================================== 00:22:10.944 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.944 19:22:48 -- common/autotest_common.sh@943 -- # kill 85093 00:22:10.944 19:22:48 -- common/autotest_common.sh@948 -- # wait 85093 00:22:11.203 19:22:48 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:11.203 19:22:48 -- host/digest.sh@77 -- # local rw bs qd 00:22:11.203 19:22:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:11.203 19:22:48 -- host/digest.sh@80 -- # rw=randwrite 00:22:11.203 19:22:48 -- host/digest.sh@80 -- # bs=131072 00:22:11.203 19:22:48 -- host/digest.sh@80 -- # qd=16 00:22:11.203 19:22:48 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:11.203 19:22:48 -- host/digest.sh@82 -- # bperfpid=85183 00:22:11.203 19:22:48 -- host/digest.sh@83 -- # waitforlisten 85183 /var/tmp/bperf.sock 00:22:11.203 19:22:48 -- common/autotest_common.sh@817 -- # '[' -z 85183 ']' 00:22:11.203 19:22:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:11.203 19:22:48 -- common/autotest_common.sh@822 -- # 
local max_retries=100 00:22:11.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:11.203 19:22:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:11.203 19:22:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:11.203 19:22:48 -- common/autotest_common.sh@10 -- # set +x 00:22:11.203 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:11.203 Zero copy mechanism will not be used. 00:22:11.203 [2024-02-14 19:22:48.524190] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:22:11.203 [2024-02-14 19:22:48.524267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85183 ] 00:22:11.462 [2024-02-14 19:22:48.648528] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.462 [2024-02-14 19:22:48.726695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.030 19:22:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:12.030 19:22:49 -- common/autotest_common.sh@850 -- # return 0 00:22:12.030 19:22:49 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:12.030 19:22:49 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:12.030 19:22:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:12.289 19:22:49 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:12.289 19:22:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:12.547 nvme0n1 00:22:12.547 19:22:49 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:12.547 19:22:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:12.806 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:12.806 Zero copy mechanism will not be used. 00:22:12.806 Running I/O for 2 seconds... 
00:22:14.709 00:22:14.709 Latency(us) 00:22:14.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.709 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:14.709 nvme0n1 : 2.00 8007.46 1000.93 0.00 0.00 1994.02 1630.95 5600.35 00:22:14.709 =================================================================================================================== 00:22:14.709 Total : 8007.46 1000.93 0.00 0.00 1994.02 1630.95 5600.35 00:22:14.709 0 00:22:14.709 19:22:52 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:14.709 19:22:52 -- host/digest.sh@92 -- # get_accel_stats 00:22:14.709 19:22:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:14.709 19:22:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:14.709 | select(.opcode=="crc32c") 00:22:14.709 | "\(.module_name) \(.executed)"' 00:22:14.709 19:22:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:14.969 19:22:52 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:14.969 19:22:52 -- host/digest.sh@93 -- # exp_module=software 00:22:14.969 19:22:52 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:14.969 19:22:52 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:14.969 19:22:52 -- host/digest.sh@97 -- # killprocess 85183 00:22:14.969 19:22:52 -- common/autotest_common.sh@924 -- # '[' -z 85183 ']' 00:22:14.969 19:22:52 -- common/autotest_common.sh@928 -- # kill -0 85183 00:22:14.969 19:22:52 -- common/autotest_common.sh@929 -- # uname 00:22:14.969 19:22:52 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:14.969 19:22:52 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 85183 00:22:14.969 19:22:52 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:22:14.969 19:22:52 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:22:14.969 killing process with pid 85183 00:22:14.969 19:22:52 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 85183' 00:22:14.969 Received shutdown signal, test time was about 2.000000 seconds 00:22:14.969 00:22:14.969 Latency(us) 00:22:14.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.969 =================================================================================================================== 00:22:14.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:14.969 19:22:52 -- common/autotest_common.sh@943 -- # kill 85183 00:22:14.969 19:22:52 -- common/autotest_common.sh@948 -- # wait 85183 00:22:15.228 19:22:52 -- host/digest.sh@126 -- # killprocess 84868 00:22:15.228 19:22:52 -- common/autotest_common.sh@924 -- # '[' -z 84868 ']' 00:22:15.228 19:22:52 -- common/autotest_common.sh@928 -- # kill -0 84868 00:22:15.228 19:22:52 -- common/autotest_common.sh@929 -- # uname 00:22:15.228 19:22:52 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:15.228 19:22:52 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 84868 00:22:15.228 19:22:52 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:22:15.228 19:22:52 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:22:15.228 killing process with pid 84868 00:22:15.228 19:22:52 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 84868' 00:22:15.228 19:22:52 -- common/autotest_common.sh@943 -- # kill 84868 00:22:15.228 19:22:52 -- common/autotest_common.sh@948 -- # wait 84868 00:22:15.488 00:22:15.488 real 0m18.182s 00:22:15.488 
user 0m33.112s 00:22:15.488 sys 0m5.250s 00:22:15.488 19:22:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:15.488 19:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:15.488 ************************************ 00:22:15.488 END TEST nvmf_digest_clean 00:22:15.488 ************************************ 00:22:15.747 19:22:52 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:15.747 19:22:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:22:15.747 19:22:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:22:15.747 19:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 ************************************ 00:22:15.747 START TEST nvmf_digest_error 00:22:15.747 ************************************ 00:22:15.747 19:22:52 -- common/autotest_common.sh@1102 -- # run_digest_error 00:22:15.747 19:22:52 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:15.747 19:22:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:15.747 19:22:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:15.747 19:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 19:22:52 -- nvmf/common.sh@469 -- # nvmfpid=85301 00:22:15.747 19:22:52 -- nvmf/common.sh@470 -- # waitforlisten 85301 00:22:15.747 19:22:52 -- common/autotest_common.sh@817 -- # '[' -z 85301 ']' 00:22:15.747 19:22:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.747 19:22:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:15.747 19:22:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:15.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.747 19:22:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.747 19:22:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:15.747 19:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 [2024-02-14 19:22:52.970280] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:22:15.747 [2024-02-14 19:22:52.970353] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.747 [2024-02-14 19:22:53.103120] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.006 [2024-02-14 19:22:53.189779] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:16.006 [2024-02-14 19:22:53.189922] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.006 [2024-02-14 19:22:53.189934] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.006 [2024-02-14 19:22:53.189942] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:16.006 [2024-02-14 19:22:53.189971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.572 19:22:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:16.572 19:22:53 -- common/autotest_common.sh@850 -- # return 0 00:22:16.572 19:22:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:16.572 19:22:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:16.572 19:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.572 19:22:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.572 19:22:53 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:16.572 19:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.572 19:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.572 [2024-02-14 19:22:53.966449] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:16.572 19:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.572 19:22:53 -- host/digest.sh@104 -- # common_target_config 00:22:16.572 19:22:53 -- host/digest.sh@43 -- # rpc_cmd 00:22:16.572 19:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.572 19:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.832 null0 00:22:16.832 [2024-02-14 19:22:54.100729] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.832 [2024-02-14 19:22:54.124878] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.832 19:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.832 19:22:54 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:16.832 19:22:54 -- host/digest.sh@54 -- # local rw bs qd 00:22:16.832 19:22:54 -- host/digest.sh@56 -- # rw=randread 00:22:16.832 19:22:54 -- host/digest.sh@56 -- # bs=4096 00:22:16.832 19:22:54 -- host/digest.sh@56 -- # qd=128 00:22:16.832 19:22:54 -- host/digest.sh@58 -- # bperfpid=85342 00:22:16.832 19:22:54 -- host/digest.sh@60 -- # waitforlisten 85342 /var/tmp/bperf.sock 00:22:16.832 19:22:54 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:16.832 19:22:54 -- common/autotest_common.sh@817 -- # '[' -z 85342 ']' 00:22:16.832 19:22:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:16.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:16.832 19:22:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:16.832 19:22:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:16.832 19:22:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:16.832 19:22:54 -- common/autotest_common.sh@10 -- # set +x 00:22:16.832 [2024-02-14 19:22:54.175465] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:22:16.832 [2024-02-14 19:22:54.175570] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85342 ] 00:22:17.091 [2024-02-14 19:22:54.310793] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.091 [2024-02-14 19:22:54.415956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.028 19:22:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:18.028 19:22:55 -- common/autotest_common.sh@850 -- # return 0 00:22:18.028 19:22:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:18.028 19:22:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:18.028 19:22:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:18.028 19:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.028 19:22:55 -- common/autotest_common.sh@10 -- # set +x 00:22:18.028 19:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.028 19:22:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:18.028 19:22:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:18.288 nvme0n1 00:22:18.288 19:22:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:18.288 19:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.288 19:22:55 -- common/autotest_common.sh@10 -- # set +x 00:22:18.288 19:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.288 19:22:55 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:18.288 19:22:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:18.288 Running I/O for 2 seconds... 
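On the initiator side, the steps traced above reduce to a handful of RPCs: configure bdevperf (on /var/tmp/bperf.sock) to retry failed I/O indefinitely, attach the controller with --ddgst so the CRC32C data digest is verified on every received data PDU, arm crc32c corruption on the target through the error accel module (rpc_cmd in the trace, i.e. the target's default RPC socket), and kick off the workload. A condensed sketch, with the flags copied from the commands printed above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # bdevperf (initiator) side, on /var/tmp/bperf.sock: never give up on errored I/O.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side: clear any previously armed crc32c injection before connecting (as done above).
    $rpc accel_error_inject_error -o crc32c -t disable

    # Attach with data digest enabled; the host now checks CRC32C on every data PDU it receives.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm corruption of the next 256 crc32c operations on the target, then drive the 2-second
    # randread workload (bs=4096, qd=128); every READ below then completes on the host as a
    # "data digest error" with status TRANSIENT TRANSPORT ERROR (00/22).
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests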
00:22:18.288 [2024-02-14 19:22:55.684343] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.288 [2024-02-14 19:22:55.684385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.288 [2024-02-14 19:22:55.684415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.288 [2024-02-14 19:22:55.696497] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.288 [2024-02-14 19:22:55.696529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.288 [2024-02-14 19:22:55.696556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.548 [2024-02-14 19:22:55.708745] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.548 [2024-02-14 19:22:55.708779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.548 [2024-02-14 19:22:55.708806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.548 [2024-02-14 19:22:55.720254] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.548 [2024-02-14 19:22:55.720287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.548 [2024-02-14 19:22:55.720315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.548 [2024-02-14 19:22:55.728882] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.548 [2024-02-14 19:22:55.728914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.548 [2024-02-14 19:22:55.728940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.548 [2024-02-14 19:22:55.738535] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.548 [2024-02-14 19:22:55.738604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.548 [2024-02-14 19:22:55.738633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.548 [2024-02-14 19:22:55.748357] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.548 [2024-02-14 19:22:55.748389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.548 [2024-02-14 19:22:55.748417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.548 [2024-02-14 19:22:55.760289] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.548 [2024-02-14 19:22:55.760321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.548 [2024-02-14 19:22:55.760348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.548 [2024-02-14 19:22:55.769751] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.548 [2024-02-14 19:22:55.769783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.548 [2024-02-14 19:22:55.769810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.548 [2024-02-14 19:22:55.781463] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.548 [2024-02-14 19:22:55.781505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.781533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.794220] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.794252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.794280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.806266] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.806299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.806325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.814350] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.814383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.814410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.826523] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.826555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.826582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.838260] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.838292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.838319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.850228] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.850260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.850287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.862883] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.862932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.862959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.875105] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.875156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.875184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.883968] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.884016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.884044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.896200] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.896248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.896275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.908424] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.908457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:18.549 [2024-02-14 19:22:55.908484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.920515] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.920588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.920617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.932333] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.932365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.932393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.944999] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.945030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.945057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.549 [2024-02-14 19:22:55.957041] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.549 [2024-02-14 19:22:55.957107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.549 [2024-02-14 19:22:55.957119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.809 [2024-02-14 19:22:55.966403] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.809 [2024-02-14 19:22:55.966434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.809 [2024-02-14 19:22:55.966461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.809 [2024-02-14 19:22:55.976348] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.809 [2024-02-14 19:22:55.976380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.809 [2024-02-14 19:22:55.976407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.809 [2024-02-14 19:22:55.987209] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.809 [2024-02-14 19:22:55.987241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:5935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.809 [2024-02-14 19:22:55.987268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.809 [2024-02-14 19:22:55.998786] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.809 [2024-02-14 19:22:55.998818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.809 [2024-02-14 19:22:55.998846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.809 [2024-02-14 19:22:56.011928] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.809 [2024-02-14 19:22:56.011960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.809 [2024-02-14 19:22:56.011988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.809 [2024-02-14 19:22:56.023188] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.023219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.023246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.033037] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.033069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.033096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.045162] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.045194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.045221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.054944] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.054992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.055020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.063874] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.063906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.063932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.074242] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.074276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.074304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.083858] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.083890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.083918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.094626] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.094657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.094684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.104930] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.104992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.105019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.117191] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.117223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.117249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.128518] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.128550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.128576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.136786] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 
00:22:18.810 [2024-02-14 19:22:56.136818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.136845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.146353] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.146384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.146411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.155500] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.155542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.155570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.164172] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.164204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.164231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.173929] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.173977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.174004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.183995] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.184028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.184055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.193157] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.193190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.193217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.204193] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.204225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.204251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.810 [2024-02-14 19:22:56.215726] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:18.810 [2024-02-14 19:22:56.215757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.810 [2024-02-14 19:22:56.215783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.227610] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.227640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.227667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.238016] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.238065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.238092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.249704] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.249736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.249762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.257752] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.257783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.257809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.269938] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.269970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.269998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.282347] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.282380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.282408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.293828] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.293860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.293887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.303615] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.303647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.303673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.312776] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.312808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.312835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.322452] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.322510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.322539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.332589] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.332621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.332648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.342706] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.342755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.342782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:19.071 [2024-02-14 19:22:56.351005] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.351053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.351080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.362802] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.362850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.362901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.375380] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.375412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.375439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.386560] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.386609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.386637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.398144] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.398176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.398204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.411133] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.411197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.411225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.419283] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.419314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.419341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.431552] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.431582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.431609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.071 [2024-02-14 19:22:56.444195] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.071 [2024-02-14 19:22:56.444227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.071 [2024-02-14 19:22:56.444253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.072 [2024-02-14 19:22:56.454030] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.072 [2024-02-14 19:22:56.454062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.072 [2024-02-14 19:22:56.454089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.072 [2024-02-14 19:22:56.465694] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.072 [2024-02-14 19:22:56.465726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.072 [2024-02-14 19:22:56.465753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.072 [2024-02-14 19:22:56.475009] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.072 [2024-02-14 19:22:56.475059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.072 [2024-02-14 19:22:56.475087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.072 [2024-02-14 19:22:56.484256] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.072 [2024-02-14 19:22:56.484288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.072 [2024-02-14 19:22:56.484315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.495248] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.495315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.495326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.504302] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.504333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.504361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.514255] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.514287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.514314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.524811] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.524842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.524870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.534716] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.534766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.534793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.544766] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.544815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.544858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.558380] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.558427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.558456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.570155] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.570202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:19.332 [2024-02-14 19:22:56.570230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.582161] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.582209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.582237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.595440] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.595513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.595537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.604953] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.605001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.605028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.616635] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.616682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.616710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.629084] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.629134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.629162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.637556] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.637612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.637640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.649763] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.649812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:2858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.649839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.661834] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.661881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.661909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.673756] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.673803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.673830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.685453] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.332 [2024-02-14 19:22:56.685524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.332 [2024-02-14 19:22:56.685536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.332 [2024-02-14 19:22:56.697958] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.333 [2024-02-14 19:22:56.698007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.333 [2024-02-14 19:22:56.698034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.333 [2024-02-14 19:22:56.709165] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.333 [2024-02-14 19:22:56.709214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.333 [2024-02-14 19:22:56.709242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.333 [2024-02-14 19:22:56.721617] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.333 [2024-02-14 19:22:56.721666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.333 [2024-02-14 19:22:56.721693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.333 [2024-02-14 19:22:56.733364] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.333 [2024-02-14 19:22:56.733413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.333 [2024-02-14 19:22:56.733441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.333 [2024-02-14 19:22:56.745528] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.333 [2024-02-14 19:22:56.745592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.333 [2024-02-14 19:22:56.745621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.754752] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.754784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.754811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.764084] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.764116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.764142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.772969] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.773001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.773028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.784167] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.784200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.784227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.794290] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.794322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.794348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.803233] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 
00:22:19.593 [2024-02-14 19:22:56.803284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.803328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.813891] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.813922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.813949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.826286] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.826317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.826344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.837727] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.837759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.837786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.846720] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.846751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.846778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.858314] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.858346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.858372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.868118] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.868150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.868176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.877500] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.877530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.877556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.887158] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.887223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.887251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.897369] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.897401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.897428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.908351] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.593 [2024-02-14 19:22:56.908400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.593 [2024-02-14 19:22:56.908427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.593 [2024-02-14 19:22:56.918405] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.594 [2024-02-14 19:22:56.918455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.594 [2024-02-14 19:22:56.918484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.594 [2024-02-14 19:22:56.929550] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.594 [2024-02-14 19:22:56.929597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.594 [2024-02-14 19:22:56.929625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.594 [2024-02-14 19:22:56.938975] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.594 [2024-02-14 19:22:56.939023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.594 [2024-02-14 19:22:56.939051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.594 [2024-02-14 19:22:56.948584] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.594 [2024-02-14 19:22:56.948615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.594 [2024-02-14 19:22:56.948642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.594 [2024-02-14 19:22:56.957143] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.594 [2024-02-14 19:22:56.957175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.594 [2024-02-14 19:22:56.957203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.594 [2024-02-14 19:22:56.966207] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.594 [2024-02-14 19:22:56.966240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.594 [2024-02-14 19:22:56.966267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.594 [2024-02-14 19:22:56.977163] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.594 [2024-02-14 19:22:56.977195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.594 [2024-02-14 19:22:56.977221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.594 [2024-02-14 19:22:56.986247] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.594 [2024-02-14 19:22:56.986279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.594 [2024-02-14 19:22:56.986306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.594 [2024-02-14 19:22:56.996075] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.594 [2024-02-14 19:22:56.996106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.594 [2024-02-14 19:22:56.996132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.594 [2024-02-14 19:22:57.006190] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.594 [2024-02-14 19:22:57.006221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.594 [2024-02-14 19:22:57.006248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.018984] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.019034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.019061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.031439] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.031471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.031499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.040289] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.040337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.040364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.051706] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.051754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.051766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.063672] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.063719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.063747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.076316] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.076348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.076375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.088383] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.088415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.088442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.096943] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.096975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.097001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.108695] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.108726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.108753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.119109] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.119160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.119187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.854 [2024-02-14 19:22:57.128242] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.854 [2024-02-14 19:22:57.128274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.854 [2024-02-14 19:22:57.128301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.137321] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.137352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.137378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.149439] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.149471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.149498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.161214] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.161245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.161272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.172618] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.172649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.172676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.182072] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.182103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.182131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.193831] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.193862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.193889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.205834] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.205866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.205892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.217277] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.217309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.217335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.227853] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.227902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.227943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.236538] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.236569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:19.855 [2024-02-14 19:22:57.236596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.247242] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.247274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.247301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.256372] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.256404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.256430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.855 [2024-02-14 19:22:57.266691] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:19.855 [2024-02-14 19:22:57.266755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.855 [2024-02-14 19:22:57.266781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.115 [2024-02-14 19:22:57.277299] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.115 [2024-02-14 19:22:57.277331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.115 [2024-02-14 19:22:57.277358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.115 [2024-02-14 19:22:57.286794] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.115 [2024-02-14 19:22:57.286842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.115 [2024-02-14 19:22:57.286892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.115 [2024-02-14 19:22:57.298622] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.115 [2024-02-14 19:22:57.298653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.115 [2024-02-14 19:22:57.298679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.115 [2024-02-14 19:22:57.308901] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.115 [2024-02-14 19:22:57.308933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.115 [2024-02-14 19:22:57.308960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.115 [2024-02-14 19:22:57.317767] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.115 [2024-02-14 19:22:57.317798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.115 [2024-02-14 19:22:57.317824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.115 [2024-02-14 19:22:57.329706] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.115 [2024-02-14 19:22:57.329738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.115 [2024-02-14 19:22:57.329765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.115 [2024-02-14 19:22:57.338114] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.115 [2024-02-14 19:22:57.338145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.338173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.349166] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.349198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.349225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.358550] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.358581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.358607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.369991] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.370023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.370050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.379085] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.379135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.379163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.391405] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.391436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.391463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.402994] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.403046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.403075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.416079] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.416110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.416137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.427933] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.427966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.427994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.439736] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.439768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.439796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.448495] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.448525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.448551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.459587] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 
00:22:20.116 [2024-02-14 19:22:57.459619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.459646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.470101] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.470133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.470160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.479469] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.479509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.479537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.489610] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.489656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.489684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.500007] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.500038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.500065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.510144] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.510176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.510203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.518588] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.518619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.518646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.116 [2024-02-14 19:22:57.527861] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.116 [2024-02-14 19:22:57.527911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.116 [2024-02-14 19:22:57.527952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.376 [2024-02-14 19:22:57.537811] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.376 [2024-02-14 19:22:57.537861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.376 [2024-02-14 19:22:57.537904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.376 [2024-02-14 19:22:57.546876] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.376 [2024-02-14 19:22:57.546941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.376 [2024-02-14 19:22:57.546968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.376 [2024-02-14 19:22:57.556954] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.376 [2024-02-14 19:22:57.556987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.376 [2024-02-14 19:22:57.557013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.376 [2024-02-14 19:22:57.569095] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.376 [2024-02-14 19:22:57.569127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.376 [2024-02-14 19:22:57.569153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.376 [2024-02-14 19:22:57.579715] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.376 [2024-02-14 19:22:57.579746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.376 [2024-02-14 19:22:57.579773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.376 [2024-02-14 19:22:57.589106] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.376 [2024-02-14 19:22:57.589153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.376 [2024-02-14 19:22:57.589180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:20.376 [2024-02-14 19:22:57.598559] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.376 [2024-02-14 19:22:57.598590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.376 [2024-02-14 19:22:57.598617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.376 [2024-02-14 19:22:57.609508] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.376 [2024-02-14 19:22:57.609555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.376 [2024-02-14 19:22:57.609582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.376 [2024-02-14 19:22:57.619507] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.376 [2024-02-14 19:22:57.619547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.376 [2024-02-14 19:22:57.619574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.376 [2024-02-14 19:22:57.630972] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.376 [2024-02-14 19:22:57.631022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.377 [2024-02-14 19:22:57.631050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.377 [2024-02-14 19:22:57.640125] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.377 [2024-02-14 19:22:57.640158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.377 [2024-02-14 19:22:57.640184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.377 [2024-02-14 19:22:57.650687] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.377 [2024-02-14 19:22:57.650719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.377 [2024-02-14 19:22:57.650746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.377 [2024-02-14 19:22:57.660619] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10d8570) 00:22:20.377 [2024-02-14 19:22:57.660650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.377 [2024-02-14 19:22:57.660677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.377 00:22:20.377 Latency(us) 00:22:20.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.377 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:20.377 nvme0n1 : 2.00 23807.78 93.00 0.00 0.00 5370.79 2353.34 17873.45 00:22:20.377 =================================================================================================================== 00:22:20.377 Total : 23807.78 93.00 0.00 0.00 5370.79 2353.34 17873.45 00:22:20.377 0 00:22:20.377 19:22:57 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:20.377 19:22:57 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:20.377 19:22:57 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:20.377 | .driver_specific 00:22:20.377 | .nvme_error 00:22:20.377 | .status_code 00:22:20.377 | .command_transient_transport_error' 00:22:20.377 19:22:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:20.637 19:22:57 -- host/digest.sh@71 -- # (( 186 > 0 )) 00:22:20.637 19:22:57 -- host/digest.sh@73 -- # killprocess 85342 00:22:20.637 19:22:57 -- common/autotest_common.sh@924 -- # '[' -z 85342 ']' 00:22:20.637 19:22:57 -- common/autotest_common.sh@928 -- # kill -0 85342 00:22:20.637 19:22:57 -- common/autotest_common.sh@929 -- # uname 00:22:20.637 19:22:57 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:20.637 19:22:57 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 85342 00:22:20.637 19:22:57 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:22:20.637 killing process with pid 85342 00:22:20.637 19:22:57 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:22:20.637 19:22:57 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 85342' 00:22:20.637 Received shutdown signal, test time was about 2.000000 seconds 00:22:20.637 00:22:20.637 Latency(us) 00:22:20.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.637 =================================================================================================================== 00:22:20.637 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:20.637 19:22:57 -- common/autotest_common.sh@943 -- # kill 85342 00:22:20.637 19:22:57 -- common/autotest_common.sh@948 -- # wait 85342 00:22:20.903 19:22:58 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:20.903 19:22:58 -- host/digest.sh@54 -- # local rw bs qd 00:22:20.903 19:22:58 -- host/digest.sh@56 -- # rw=randread 00:22:20.903 19:22:58 -- host/digest.sh@56 -- # bs=131072 00:22:20.903 19:22:58 -- host/digest.sh@56 -- # qd=16 00:22:20.903 19:22:58 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:20.903 19:22:58 -- host/digest.sh@58 -- # bperfpid=85431 00:22:20.903 19:22:58 -- host/digest.sh@60 -- # waitforlisten 85431 /var/tmp/bperf.sock 00:22:20.903 19:22:58 -- common/autotest_common.sh@817 -- # '[' -z 85431 ']' 00:22:20.903 19:22:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:20.903 19:22:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:20.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
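The trace above closes out the 4 KiB verify run: bdevperf reports roughly 23.8k IOPS over the 2-second window, and the script then reads back how many completions failed with COMMAND TRANSIENT TRANSPORT ERROR by querying bdevperf's iostat over the RPC socket and filtering the per-bdev NVMe error counters with jq, asserting the count is non-zero ((( 186 > 0 ))) before killing the process. What follows is a minimal sketch of that readback step reconstructed from the commands in the trace; it is not the verbatim digest.sh helper, and only the socket, script path, bdev name and jq fields are taken from the log.

  #!/usr/bin/env bash
  # Sketch of the transient-error readback shown in the trace above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  get_transient_errcount() {
      local bdev=$1
      # bdev_get_iostat exposes per-bdev NVMe error counters (enabled earlier via
      # bdev_nvme_set_options --nvme-error-stat); jq pulls out the transient transport errors.
      "$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  # The run only passes if the injected crc32c corruption actually surfaced as errors.
  (( errcount > 0 ))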
00:22:20.903 19:22:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:20.903 19:22:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:20.903 19:22:58 -- common/autotest_common.sh@10 -- # set +x 00:22:20.903 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:20.903 Zero copy mechanism will not be used. 00:22:20.903 [2024-02-14 19:22:58.254294] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:22:20.903 [2024-02-14 19:22:58.254396] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85431 ] 00:22:21.202 [2024-02-14 19:22:58.384947] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.202 [2024-02-14 19:22:58.463642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.149 19:22:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:22.149 19:22:59 -- common/autotest_common.sh@850 -- # return 0 00:22:22.149 19:22:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:22.149 19:22:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:22.149 19:22:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:22.149 19:22:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.149 19:22:59 -- common/autotest_common.sh@10 -- # set +x 00:22:22.149 19:22:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.149 19:22:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:22.149 19:22:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:22.409 nvme0n1 00:22:22.409 19:22:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:22.409 19:22:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.409 19:22:59 -- common/autotest_common.sh@10 -- # set +x 00:22:22.409 19:22:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.409 19:22:59 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:22.409 19:22:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:22.409 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:22.409 Zero copy mechanism will not be used. 00:22:22.409 Running I/O for 2 seconds... 
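At this point the second bdevperf instance (128 KiB random reads, queue depth 16) is up and armed: NVMe error statistics are enabled, the TCP controller is attached with data digest (--ddgst) turned on, crc32c corruption is injected through the accel error module, and perform_tests starts the 2-second run whose digest errors follow below. Here is a condensed sketch of that setup sequence assembled from the commands visible in the trace; paths, target address, NQN and sockets are copied from the log, while the real script additionally waits for the RPC socket to come up and issues the accel_error_inject_error calls through rpc_cmd rather than the bperf socket.

  #!/usr/bin/env bash
  # Condensed sketch of the setup traced above for the 128 KiB / QD16 digest-error run.
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf on its own RPC socket; -z makes it wait for a perform_tests RPC.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Enable per-controller NVMe error statistics and set the bdev retry count
  # (arguments copied from the trace).
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the NVMe-oF TCP controller with data digest enabled.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm crc32c corruption in the accel error module (flags copied from the trace;
  # sent to the application's default RPC socket here as a simplification).
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Start the timed run; each corrupted digest surfaces as a transient transport error.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests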
00:22:22.409 [2024-02-14 19:22:59.737777] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.737821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.737850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.741366] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.741430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.741457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.745531] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.745577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.745589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.748985] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.749017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.749044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.752338] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.752370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.752397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.756836] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.756870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.756897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.760385] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.760417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.760444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.764338] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.764371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.764399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.767485] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.767541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.767569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.771490] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.771531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.771558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.775407] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.775438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.775465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.778930] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.778979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.778991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.782473] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.782513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.782541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.786191] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.786224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.786252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.789556] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.409 [2024-02-14 19:22:59.789588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.409 [2024-02-14 19:22:59.789615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.409 [2024-02-14 19:22:59.793055] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.410 [2024-02-14 19:22:59.793087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.410 [2024-02-14 19:22:59.793114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.410 [2024-02-14 19:22:59.796468] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.410 [2024-02-14 19:22:59.796509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.410 [2024-02-14 19:22:59.796536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.410 [2024-02-14 19:22:59.800298] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.410 [2024-02-14 19:22:59.800345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.410 [2024-02-14 19:22:59.800372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.410 [2024-02-14 19:22:59.804589] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.410 [2024-02-14 19:22:59.804621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.410 [2024-02-14 19:22:59.804648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.410 [2024-02-14 19:22:59.808107] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.410 [2024-02-14 19:22:59.808138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.410 [2024-02-14 19:22:59.808166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.410 [2024-02-14 19:22:59.811643] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.410 [2024-02-14 19:22:59.811674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.410 
[2024-02-14 19:22:59.811701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.410 [2024-02-14 19:22:59.815417] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.410 [2024-02-14 19:22:59.815448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.410 [2024-02-14 19:22:59.815474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.410 [2024-02-14 19:22:59.819132] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.410 [2024-02-14 19:22:59.819180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.410 [2024-02-14 19:22:59.819208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.410 [2024-02-14 19:22:59.823406] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.410 [2024-02-14 19:22:59.823469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.410 [2024-02-14 19:22:59.823514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.671 [2024-02-14 19:22:59.827940] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.671 [2024-02-14 19:22:59.827972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.671 [2024-02-14 19:22:59.828000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.671 [2024-02-14 19:22:59.831913] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.671 [2024-02-14 19:22:59.831945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.671 [2024-02-14 19:22:59.831972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.671 [2024-02-14 19:22:59.835276] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.671 [2024-02-14 19:22:59.835337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.671 [2024-02-14 19:22:59.835364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.671 [2024-02-14 19:22:59.839271] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.671 [2024-02-14 19:22:59.839320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.671 [2024-02-14 19:22:59.839361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.671 [2024-02-14 19:22:59.842988] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.671 [2024-02-14 19:22:59.843040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.671 [2024-02-14 19:22:59.843069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.671 [2024-02-14 19:22:59.846683] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.671 [2024-02-14 19:22:59.846730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.671 [2024-02-14 19:22:59.846757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.671 [2024-02-14 19:22:59.850611] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.671 [2024-02-14 19:22:59.850661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.671 [2024-02-14 19:22:59.850704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.671 [2024-02-14 19:22:59.854016] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.671 [2024-02-14 19:22:59.854063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.854091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.857724] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.857773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.857800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.861380] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.861411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.861438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.864967] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.864998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.865025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.868462] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.868507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.868535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.871865] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.871896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.871923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.875571] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.875602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.875628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.878860] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.878913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.878941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.882269] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.882301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.882328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.885835] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.885866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.885893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.889333] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.889364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.889391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.893228] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.893259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.893286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.896744] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.896776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.896804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.900456] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.900512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.900525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.903989] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.904020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.904047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.907453] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.907524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.907553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.911459] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.911531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.911559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.914975] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 
[2024-02-14 19:22:59.915025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.915054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.919615] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.919665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.919693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.924570] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.924633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.924647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.928810] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.928842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.928869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.932683] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.932714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.932741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.936205] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.936236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.936263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.940013] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.940044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.940071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.942933] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ed2c40) 00:22:22.672 [2024-02-14 19:22:59.942980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.672 [2024-02-14 19:22:59.943008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.672 [2024-02-14 19:22:59.946542] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.946573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.946600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.950096] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.950128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.950155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.954162] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.954194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.954222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.957534] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.957587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.957599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.961425] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.961473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.961511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.965688] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.965739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.965752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.969322] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.969371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.969399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.973359] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.973411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.973436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.977475] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.977565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.977578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.981736] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.981788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.981801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.985728] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.985780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.985792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.989707] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.989759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.989771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:22:59.993577] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.993626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.993654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:22.673 [2024-02-14 19:22:59.997355] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:22:59.997404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:22:59.997431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.001685] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.001736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:23:00.001764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.005653] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.005702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:23:00.005730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.009708] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.009763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:23:00.009777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.013899] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.013963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:23:00.013990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.017748] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.017798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:23:00.017825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.023116] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.023155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:23:00.023169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.027750] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.027797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:23:00.027825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.032273] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.032322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:23:00.032349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.035977] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.036027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:23:00.036054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.039684] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.039733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.673 [2024-02-14 19:23:00.039762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.673 [2024-02-14 19:23:00.043539] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.673 [2024-02-14 19:23:00.043615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.043643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.674 [2024-02-14 19:23:00.047546] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.674 [2024-02-14 19:23:00.047605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.047648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.674 [2024-02-14 19:23:00.051125] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.674 [2024-02-14 19:23:00.051163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.051177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.674 [2024-02-14 19:23:00.055275] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.674 [2024-02-14 19:23:00.055312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.055326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.674 [2024-02-14 19:23:00.059539] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.674 [2024-02-14 19:23:00.059596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.059623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.674 [2024-02-14 19:23:00.063632] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.674 [2024-02-14 19:23:00.063680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.063708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.674 [2024-02-14 19:23:00.067544] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.674 [2024-02-14 19:23:00.067602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.067631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.674 [2024-02-14 19:23:00.071704] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.674 [2024-02-14 19:23:00.071750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.071777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.674 [2024-02-14 19:23:00.075804] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.674 [2024-02-14 19:23:00.075835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.075862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.674 [2024-02-14 19:23:00.079979] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.674 [2024-02-14 19:23:00.080011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.080038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.674 [2024-02-14 19:23:00.084034] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.674 [2024-02-14 19:23:00.084084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.674 [2024-02-14 19:23:00.084112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.935 [2024-02-14 19:23:00.088109] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.935 [2024-02-14 19:23:00.088159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.935 [2024-02-14 19:23:00.088187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.935 [2024-02-14 19:23:00.092254] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.935 [2024-02-14 19:23:00.092302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.935 [2024-02-14 19:23:00.092329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.935 [2024-02-14 19:23:00.096224] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.935 [2024-02-14 19:23:00.096257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.935 [2024-02-14 19:23:00.096284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.935 [2024-02-14 19:23:00.100307] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.935 [2024-02-14 19:23:00.100339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.935 [2024-02-14 19:23:00.100367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.935 [2024-02-14 19:23:00.103608] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.935 [2024-02-14 19:23:00.103655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.935 [2024-02-14 19:23:00.103682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.935 [2024-02-14 19:23:00.107411] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.935 [2024-02-14 19:23:00.107441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.935 
[2024-02-14 19:23:00.107470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.935 [2024-02-14 19:23:00.110522] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.110565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.110594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.114606] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.114639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.114666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.118646] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.118680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.118709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.122809] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.122894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.122923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.126378] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.126410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.126437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.130400] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.130433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.130461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.134241] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.134275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.134303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.138232] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.138266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.138293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.142242] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.142276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.142303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.146032] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.146064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.146092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.149643] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.149676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.149703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.153730] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.153764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.153806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.157511] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.157542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.157569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.161641] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.161672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.161700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.165178] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.165210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.165236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.169252] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.169284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.169311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.172724] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.172756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.172783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.176685] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.176725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.176753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.180121] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.180153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.180180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.183575] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.183604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.183631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.186831] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.186863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.186914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.190424] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.190471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.190511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.194471] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.194531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.194560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.198606] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.198639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.198667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.201643] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.936 [2024-02-14 19:23:00.201693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.936 [2024-02-14 19:23:00.201721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.936 [2024-02-14 19:23:00.205661] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.205693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.205722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.209076] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.209107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.209134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.213193] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 
00:22:22.937 [2024-02-14 19:23:00.213225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.213252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.217505] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.217535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.217561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.221366] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.221397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.221424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.224952] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.224982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.225010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.228385] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.228417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.228444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.232541] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.232581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.232611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.236854] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.236901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.236929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.240532] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.240565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.240592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.244112] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.244145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.244172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.247708] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.247757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.247784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.251715] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.251750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.251777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.255278] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.255325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.255352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.259010] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.259046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.259058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.262497] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.262527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.262555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.266022] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.266055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.266083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.269749] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.269783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.269810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.273447] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.273482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.273520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.277086] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.277119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.277146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.281050] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.281083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.281110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.285205] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.285236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.285265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.289346] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.289377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.289404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:22.937 [2024-02-14 19:23:00.293132] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.293165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.293192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.937 [2024-02-14 19:23:00.296631] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.937 [2024-02-14 19:23:00.296679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.937 [2024-02-14 19:23:00.296706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.300071] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.300104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.300132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.303772] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.303805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.303833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.307183] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.307217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.307245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.311066] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.311099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.311126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.314320] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.314352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.314379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.318591] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.318622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.318649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.322613] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.322643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.322670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.326527] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.326557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.326584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.330060] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.330091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.330118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.334183] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.334213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.334240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.338591] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.338622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.338650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.342263] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.342296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.342307] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.345862] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.345896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.345924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.938 [2024-02-14 19:23:00.349970] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:22.938 [2024-02-14 19:23:00.350017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.938 [2024-02-14 19:23:00.350044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.199 [2024-02-14 19:23:00.353979] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.199 [2024-02-14 19:23:00.354011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.199 [2024-02-14 19:23:00.354023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.199 [2024-02-14 19:23:00.357943] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.199 [2024-02-14 19:23:00.357976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.199 [2024-02-14 19:23:00.358004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.199 [2024-02-14 19:23:00.361982] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.199 [2024-02-14 19:23:00.362015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.199 [2024-02-14 19:23:00.362026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.199 [2024-02-14 19:23:00.365663] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.199 [2024-02-14 19:23:00.365695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.199 [2024-02-14 19:23:00.365723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.199 [2024-02-14 19:23:00.369278] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.199 [2024-02-14 19:23:00.369310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.199 [2024-02-14 19:23:00.369321] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.199 [2024-02-14 19:23:00.373374] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.199 [2024-02-14 19:23:00.373404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.199 [2024-02-14 19:23:00.373415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.199 [2024-02-14 19:23:00.376646] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.199 [2024-02-14 19:23:00.376677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.199 [2024-02-14 19:23:00.376704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.199 [2024-02-14 19:23:00.380540] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.199 [2024-02-14 19:23:00.380572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.199 [2024-02-14 19:23:00.380583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.199 [2024-02-14 19:23:00.383556] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.199 [2024-02-14 19:23:00.383589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.199 [2024-02-14 19:23:00.383600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.199 [2024-02-14 19:23:00.387721] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.387753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.387764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.390557] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.390587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.390614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.393889] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.393921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:23.200 [2024-02-14 19:23:00.393949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.397771] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.397803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.397831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.401778] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.401811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.401822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.405578] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.405615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.405643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.408515] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.408546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.408557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.411770] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.411801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.411828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.416114] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.416146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.416157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.419003] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.419034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.419045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.422811] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.422842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.422854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.426880] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.426927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.426954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.430602] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.430634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.430645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.433446] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.433477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.433500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.436816] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.436849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.436860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.440304] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.440335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.440346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.443979] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.444011] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.444022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.447099] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.447133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.447160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.450957] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.450989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.451016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.454361] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.454393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.454404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.457783] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.457815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.457827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.461633] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.461664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.461675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.465736] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.465766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.465793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.469314] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.469345] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.469357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.200 [2024-02-14 19:23:00.472481] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.200 [2024-02-14 19:23:00.472521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.200 [2024-02-14 19:23:00.472533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.476067] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.476098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.476109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.479388] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.479419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.479430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.483235] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.483266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.483293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.486715] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.486746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.486773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.490258] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.490290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.490301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.494110] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.494141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.494152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.497650] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.497688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.497700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.501329] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.501362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.501373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.504932] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.504963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.504975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.508173] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.508204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.508215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.512136] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.512167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.512178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.515241] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.515273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.515301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.518547] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.518576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.518587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.522447] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.522480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.522503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.525833] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.525865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.525877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.529758] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.529791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.529802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.533000] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.533032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.533043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.536455] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.536498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.536526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.540481] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.540523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.540535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:23.201 [2024-02-14 19:23:00.544120] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.544152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.544163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.547782] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.547814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.547825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.551257] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.551288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.551299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.555151] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.555183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.555219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.558347] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.558378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.558389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.562164] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.562195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.562207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.565706] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.201 [2024-02-14 19:23:00.565738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.201 [2024-02-14 19:23:00.565749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.201 [2024-02-14 19:23:00.568730] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.568761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.568771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.571809] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.571841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.571868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.575362] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.575393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.575404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.579315] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.579346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.579357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.582915] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.582964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.582992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.586391] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.586423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.586434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.590091] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.590123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.590134] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.593832] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.593864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.593876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.597729] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.597761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.597772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.601555] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.601589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.601601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.605294] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.605324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.605335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.607862] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.607892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.607903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.202 [2024-02-14 19:23:00.612259] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.202 [2024-02-14 19:23:00.612292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.202 [2024-02-14 19:23:00.612319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.615796] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.615826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.615854] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.619974] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.620006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.620018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.623792] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.623823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.623834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.627719] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.627751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.627763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.631751] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.631783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.631794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.635454] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.635496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.635510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.638437] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.638467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.638479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.641814] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.641846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:23.464 [2024-02-14 19:23:00.641856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.645342] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.645373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.645384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.648922] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.648956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.648983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.652626] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.652658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.652670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.656210] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.656243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.656254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.659618] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.659651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.659662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.663056] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.663090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.663101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.666588] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.666620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.666632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.669658] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.669689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.669700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.464 [2024-02-14 19:23:00.673513] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.464 [2024-02-14 19:23:00.673542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.464 [2024-02-14 19:23:00.673554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.677678] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.677708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.677719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.681452] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.681484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.681507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.685828] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.685861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.685872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.689526] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.689557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.689568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.693166] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.693197] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.693208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.697105] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.697136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.697147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.700765] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.700797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.700808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.704554] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.704589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.704600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.707048] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.707080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.707091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.711080] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.711111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.711139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.715304] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.715337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.715348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.718715] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.718747] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.718758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.721813] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.721846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.721873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.725221] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.725253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.725264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.728977] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.729009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.729020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.732110] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.732142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.732153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.736291] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.736324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.736335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.740081] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.740114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.740125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.743429] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.743461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.743487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.746778] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.746810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.746821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.750144] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.750176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.750187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.753935] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.753967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.753979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.757208] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.757240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.757251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.760911] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.760944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.465 [2024-02-14 19:23:00.760955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.465 [2024-02-14 19:23:00.764507] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.465 [2024-02-14 19:23:00.764538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.764549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.767896] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.767929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.767940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.771253] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.771285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.771296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.774771] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.774803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.774814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.778414] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.778447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.778475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.781979] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.782011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.782023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.785585] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.785618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.785629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.789711] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.789743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.789753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:23.466 [2024-02-14 19:23:00.793741] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.793773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.793784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.797780] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.797813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.797825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.801903] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.801934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.801945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.805539] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.805571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.805584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.809104] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.809137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.809149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.812913] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.812945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.812956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.815403] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.815434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.815444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.819048] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.819081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.819108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.822563] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.822593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.822620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.826734] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.826764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.826791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.831218] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.831250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.831277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.834909] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.834941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.834968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.838232] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.838264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.838276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.841714] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.841746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.841757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.844866] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.844898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.844909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.848960] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.848992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.466 [2024-02-14 19:23:00.849003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.466 [2024-02-14 19:23:00.852894] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.466 [2024-02-14 19:23:00.852925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.467 [2024-02-14 19:23:00.852936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.467 [2024-02-14 19:23:00.856234] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.467 [2024-02-14 19:23:00.856265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.467 [2024-02-14 19:23:00.856276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.467 [2024-02-14 19:23:00.860321] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.467 [2024-02-14 19:23:00.860353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.467 [2024-02-14 19:23:00.860364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.467 [2024-02-14 19:23:00.864819] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.467 [2024-02-14 19:23:00.864851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.467 [2024-02-14 19:23:00.864861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.467 [2024-02-14 19:23:00.867838] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.467 [2024-02-14 19:23:00.867870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.467 
[2024-02-14 19:23:00.867881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.467 [2024-02-14 19:23:00.871492] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.467 [2024-02-14 19:23:00.871532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.467 [2024-02-14 19:23:00.871543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.467 [2024-02-14 19:23:00.875175] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.467 [2024-02-14 19:23:00.875207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.467 [2024-02-14 19:23:00.875235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.728 [2024-02-14 19:23:00.879241] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.728 [2024-02-14 19:23:00.879274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.728 [2024-02-14 19:23:00.879316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.728 [2024-02-14 19:23:00.882801] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.728 [2024-02-14 19:23:00.882831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.728 [2024-02-14 19:23:00.882842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.728 [2024-02-14 19:23:00.886830] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.728 [2024-02-14 19:23:00.886863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.728 [2024-02-14 19:23:00.886914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.728 [2024-02-14 19:23:00.890673] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.728 [2024-02-14 19:23:00.890705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.728 [2024-02-14 19:23:00.890716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.728 [2024-02-14 19:23:00.895156] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.728 [2024-02-14 19:23:00.895191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:23.728 [2024-02-14 19:23:00.895219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.728 [2024-02-14 19:23:00.898538] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.728 [2024-02-14 19:23:00.898570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.728 [2024-02-14 19:23:00.898581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.902323] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.902356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.902367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.906126] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.906156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.906167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.908655] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.908686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.908698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.912591] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.912624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.912635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.916729] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.916761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.916773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.920170] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.920202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.920213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.923553] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.923583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.923595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.926592] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.926623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.926634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.930818] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.930851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.930884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.935155] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.935223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.935235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.939928] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.939993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.940006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.944260] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.944291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.944318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.948287] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.948317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.948328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.952731] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.952762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.952773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.955793] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.955823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.955834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.959503] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.959532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.959543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.962963] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.962995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.963022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.966798] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.966830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.966841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.970352] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.970383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.970394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.974177] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 
[2024-02-14 19:23:00.974207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.974217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.978202] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.978233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.978261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.981615] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.981644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.981671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.985466] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.985550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.985563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.989889] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.989965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.989992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.994692] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.994729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.729 [2024-02-14 19:23:00.994743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.729 [2024-02-14 19:23:00.998624] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.729 [2024-02-14 19:23:00.998661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:00.998673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.002226] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.002259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.002270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.006460] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.006533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.006549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.010608] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.010643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.010659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.014447] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.014479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.014523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.018155] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.018186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.018197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.021652] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.021688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.021701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.025489] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.025561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.025573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.029296] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.029327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.029338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.033775] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.033808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.033819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.037276] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.037308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.037319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.040555] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.040586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.040597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.043727] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.043759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.043787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.047281] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.047313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.047324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.050862] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.050918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.050945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:23.730 [2024-02-14 19:23:01.054539] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.054569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.054580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.058235] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.058267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.058279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.062187] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.062220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.062232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.065709] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.065742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.065754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.068964] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.068996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.069008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.072781] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.072814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.072842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.076157] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.076188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.076199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.080278] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.080309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.080320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.084362] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.084394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.084406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.087955] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.730 [2024-02-14 19:23:01.087987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.730 [2024-02-14 19:23:01.087999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.730 [2024-02-14 19:23:01.090888] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.090920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.090947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.094219] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.094251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.094262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.097574] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.097606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.097616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.101047] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.101080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.101092] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.104771] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.104803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.104814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.108271] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.108303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.108314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.112015] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.112047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.112058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.115351] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.115384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.115411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.119058] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.119101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.119129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.122034] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.122066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.122078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.125320] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.125352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.125363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.128862] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.128895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.128907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.132584] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.132616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.132627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.136538] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.136565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.136576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.731 [2024-02-14 19:23:01.140589] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.731 [2024-02-14 19:23:01.140621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.731 [2024-02-14 19:23:01.140633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.992 [2024-02-14 19:23:01.144436] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.992 [2024-02-14 19:23:01.144468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.992 [2024-02-14 19:23:01.144496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.992 [2024-02-14 19:23:01.148021] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.992 [2024-02-14 19:23:01.148053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.992 [2024-02-14 19:23:01.148063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.992 [2024-02-14 19:23:01.152050] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.992 [2024-02-14 19:23:01.152082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:23.992 [2024-02-14 19:23:01.152093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.992 [2024-02-14 19:23:01.155454] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.992 [2024-02-14 19:23:01.155496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.992 [2024-02-14 19:23:01.155508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.992 [2024-02-14 19:23:01.159468] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.992 [2024-02-14 19:23:01.159519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.992 [2024-02-14 19:23:01.159533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.992 [2024-02-14 19:23:01.162781] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.992 [2024-02-14 19:23:01.162813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.992 [2024-02-14 19:23:01.162824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.992 [2024-02-14 19:23:01.166125] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.992 [2024-02-14 19:23:01.166158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.992 [2024-02-14 19:23:01.166169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.992 [2024-02-14 19:23:01.169177] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.992 [2024-02-14 19:23:01.169210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.992 [2024-02-14 19:23:01.169222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.172709] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.172741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.172752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.176268] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.176301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.176312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.179701] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.179733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.179744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.183337] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.183368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.183379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.187140] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.187173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.187200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.190603] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.190634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.190645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.194169] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.194202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.194213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.198014] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.198044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.198055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.201825] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.201858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.201869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.205705] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.205736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.205747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.209732] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.209762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.209774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.213533] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.213563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.213574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.216948] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.216979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.216990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.220181] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.220213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.220223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.223680] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.223711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.223723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.226831] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.226863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.226881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.230350] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.230383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.230394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.234168] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.234201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.234213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.237563] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.237595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.237607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.241211] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.241244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.241255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.244518] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.244549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.244560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.247759] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.247791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.247802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.251550] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 
00:22:23.993 [2024-02-14 19:23:01.251581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.251592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.254960] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.254993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.993 [2024-02-14 19:23:01.255005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.993 [2024-02-14 19:23:01.258799] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.993 [2024-02-14 19:23:01.258831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.258842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.262373] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.262404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.262415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.265984] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.266015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.266027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.269757] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.269788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.269799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.273921] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.273952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.273963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.277724] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.277756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.277766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.280794] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.280826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.280838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.284197] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.284228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.284255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.287609] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.287641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.287652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.291362] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.291395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.291422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.295048] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.295082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.295109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.298382] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.298415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.298426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.301953] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.301986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.301997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.305435] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.305467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.305478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.309094] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.309127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.309139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.312413] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.312445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.312457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.316058] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.316090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.316101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.319160] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.319192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.319220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.322562] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.322595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.322607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
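(Note on the repeated messages above: each "*ERROR*: data digest error" line means the CRC-32C data digest (DDGST) recomputed over a received C2H data PDU did not match the digest carried in the PDU, and the affected READ is then completed with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status shown in the paired NOTICE lines. Below is a minimal, self-contained sketch of what that digest comparison amounts to; the payload buffer, the injected single-byte corruption, and the crc32c helper name are illustrative only and are not SPDK's implementation, which, per the callback name in the log, runs the same CRC-32C computation through its accel sequence path.)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Minimal bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78),
     * the checksum the NVMe/TCP data digest (DDGST) is defined over. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        uint8_t payload[32];                      /* stand-in for a C2H data PDU payload */
        memset(payload, 0xA5, sizeof(payload));

        /* Digest the sender would append to the PDU. */
        uint32_t ddgst_sent = crc32c(payload, sizeof(payload));

        payload[7] ^= 0x01;                       /* simulate one corrupted byte on the wire */

        /* Digest the receiver recomputes over the bytes it actually got. */
        uint32_t ddgst_recv = crc32c(payload, sizeof(payload));

        if (ddgst_recv != ddgst_sent)
            printf("data digest error: expected 0x%08x, computed 0x%08x\n",
                   ddgst_sent, ddgst_recv);
        return 0;
    }

(When the two digests differ, as forced above, the receiver cannot trust the data block, which is why the host side here reports each such READ back as a transient transport error rather than surfacing corrupt data.)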
00:22:23.994 [2024-02-14 19:23:01.326064] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.326097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.326108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.329601] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.329633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.329644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.333126] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.333159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.333170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.336438] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.336470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.336481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.339952] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.339984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.340012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.343730] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.994 [2024-02-14 19:23:01.343764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.994 [2024-02-14 19:23:01.343792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.994 [2024-02-14 19:23:01.347508] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.347552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.347580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.352186] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.352221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.352249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.356711] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.356746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.356775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.360485] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.360560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.360573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.364268] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.364300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.364327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.367852] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.367898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.367926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.371725] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.371757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.371785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.375300] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.375331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.375359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.379375] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.379406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.379434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.383154] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.383186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.383197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.386865] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.386920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.386949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.390126] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.390158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.390184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.393333] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.393366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.393393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.397117] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.397149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.397176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.400705] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.400752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.400779] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.995 [2024-02-14 19:23:01.404768] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:23.995 [2024-02-14 19:23:01.404803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.995 [2024-02-14 19:23:01.404832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.256 [2024-02-14 19:23:01.408815] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.256 [2024-02-14 19:23:01.408864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.408891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.412262] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.412293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.412320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.416663] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.416693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.416720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.420655] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.420687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.420715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.423749] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.423782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.423809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.427862] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.427894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.427921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.431742] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.431772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.431799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.435165] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.435214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.435241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.439416] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.439446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.439472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.443198] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.443231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.443258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.446907] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.446958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.446970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.450476] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.450535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.450563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.454283] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.454315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.454343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.458330] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.458362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.458390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.462431] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.462463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.462490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.465526] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.465554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.465581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.469624] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.469656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.469683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.472919] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.472951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.472978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.476667] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.476699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.476727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.480321] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.480353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.480379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.484061] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.484094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.484122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.487695] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.487727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.487754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.491071] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.491105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.491133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.494604] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.494635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.257 [2024-02-14 19:23:01.494662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.257 [2024-02-14 19:23:01.498412] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.257 [2024-02-14 19:23:01.498445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.498473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.502228] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.502260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.502287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.505741] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 
[2024-02-14 19:23:01.505773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.505799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.508807] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.508840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.508868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.512975] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.513006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.513033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.516246] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.516276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.516303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.520378] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.520409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.520436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.524690] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.524722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.524749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.528604] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.528637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.528665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.532508] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.532557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.532597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.536128] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.536161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.536188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.540035] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.540067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.540078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.543583] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.543615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.543642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.546583] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.546615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.546642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.550637] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.550670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.550697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.553983] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.554015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.554027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.557434] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.557466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.557478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.561041] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.561072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.561083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.564512] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.564544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.564555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.568350] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.568383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.568394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.571963] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.571996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.572007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.575851] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.575883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.575893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.579188] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.579218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.579245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:24.258 [2024-02-14 19:23:01.582667] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.582698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.582709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.258 [2024-02-14 19:23:01.586019] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.258 [2024-02-14 19:23:01.586053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.258 [2024-02-14 19:23:01.586064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.589334] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.589366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.589377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.592445] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.592477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.592500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.595849] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.595881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.595892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.598965] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.598997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.599024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.602710] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.602742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.602753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.606201] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.606234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.606245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.610007] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.610041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.610053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.613742] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.613775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.613787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.617273] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.617306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.617316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.620437] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.620469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.620480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.624226] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.624259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.624270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.627818] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.627851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.627862] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.631373] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.631404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.631415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.635254] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.635285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.635312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.638830] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.638860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.638917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.642239] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.642269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.642281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.645673] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.645704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.645715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.648975] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.649008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.649019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.652790] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.652821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.652848] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.656567] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.656600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.656611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.660315] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.660347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.660358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.664325] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.664356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.664366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.259 [2024-02-14 19:23:01.668208] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.259 [2024-02-14 19:23:01.668240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.259 [2024-02-14 19:23:01.668251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.519 [2024-02-14 19:23:01.672266] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.519 [2024-02-14 19:23:01.672316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.519 [2024-02-14 19:23:01.672345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.519 [2024-02-14 19:23:01.676209] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.519 [2024-02-14 19:23:01.676239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.519 [2024-02-14 19:23:01.676251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.519 [2024-02-14 19:23:01.680009] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.519 [2024-02-14 19:23:01.680040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:24.519 [2024-02-14 19:23:01.680068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.683328] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.683360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.683371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.686420] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.686453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.686480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.690046] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.690078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.690088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.693330] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.693362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.693374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.696853] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.696886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.696897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.700705] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.700737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.700749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.704308] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.704341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.704352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.708016] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.708048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.708059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.711129] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.711162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.711189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.714656] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.714687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.714699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.718348] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.718380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.718391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.722162] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.722194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.722206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.725857] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.725890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.520 [2024-02-14 19:23:01.725901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.520 [2024-02-14 19:23:01.728653] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed2c40) 00:22:24.520 [2024-02-14 19:23:01.728685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:24.520 [2024-02-14 19:23:01.728713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:24.520
00:22:24.520 Latency(us)
00:22:24.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:24.520 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:24.520 nvme0n1 : 2.00 8357.83 1044.73 0.00 0.00 1911.54 629.29 6732.33
00:22:24.520 ===================================================================================================================
00:22:24.520 Total : 8357.83 1044.73 0.00 0.00 1911.54 629.29 6732.33
00:22:24.520 0
00:22:24.520 19:23:01 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:24.520 19:23:01 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:24.520 19:23:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:24.520 19:23:01 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:24.520 | .driver_specific
00:22:24.520 | .nvme_error
00:22:24.520 | .status_code
00:22:24.520 | .command_transient_transport_error'
00:22:24.780 19:23:01 -- host/digest.sh@71 -- # (( 539 > 0 ))
00:22:24.780 19:23:01 -- host/digest.sh@73 -- # killprocess 85431
00:22:24.780 19:23:01 -- common/autotest_common.sh@924 -- # '[' -z 85431 ']'
00:22:24.780 19:23:01 -- common/autotest_common.sh@928 -- # kill -0 85431
00:22:24.780 19:23:01 -- common/autotest_common.sh@929 -- # uname
00:22:24.780 19:23:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:22:24.780 19:23:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 85431
00:22:24.780 19:23:02 -- common/autotest_common.sh@930 -- # process_name=reactor_1
00:22:24.780 19:23:02 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']'
00:22:24.780 19:23:02 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 85431'
00:22:24.780 killing process with pid 85431 Received shutdown signal, test time was about 2.000000 seconds
00:22:24.780
00:22:24.780 Latency(us)
00:22:24.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:24.780 ===================================================================================================================
00:22:24.780 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:24.780 19:23:02 -- common/autotest_common.sh@943 -- # kill 85431
00:22:24.780 19:23:02 -- common/autotest_common.sh@948 -- # wait 85431
00:22:25.040 19:23:02 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:22:25.040 19:23:02 -- host/digest.sh@54 -- # local rw bs qd
00:22:25.040 19:23:02 -- host/digest.sh@56 -- # rw=randwrite
00:22:25.040 19:23:02 -- host/digest.sh@56 -- # bs=4096
00:22:25.040 19:23:02 -- host/digest.sh@56 -- # qd=128
00:22:25.040 19:23:02 -- host/digest.sh@58 -- # bperfpid=85516
00:22:25.040 19:23:02 -- host/digest.sh@60 -- # waitforlisten 85516 /var/tmp/bperf.sock
00:22:25.040 19:23:02 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:25.040 19:23:02 -- common/autotest_common.sh@817 -- # '[' -z 85516 ']'
00:22:25.040 19:23:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:25.040 19:23:02 -- common/autotest_common.sh@822 -- # local max_retries=100
00:22:25.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:25.040 19:23:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:25.040 19:23:02 -- common/autotest_common.sh@826 -- # xtrace_disable
00:22:25.040 19:23:02 -- common/autotest_common.sh@10 -- # set +x
00:22:25.040 [2024-02-14 19:23:02.285628] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:22:25.040 [2024-02-14 19:23:02.285724] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85516 ]
00:22:25.040 [2024-02-14 19:23:02.420071] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:25.300 [2024-02-14 19:23:02.491710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:25.868 19:23:03 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:22:25.868 19:23:03 -- common/autotest_common.sh@850 -- # return 0
00:22:25.868 19:23:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:25.868 19:23:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:26.127 19:23:03 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:26.127 19:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:26.127 19:23:03 -- common/autotest_common.sh@10 -- # set +x
00:22:26.127 19:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:26.127 19:23:03 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:26.127 19:23:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:26.386 nvme0n1
00:22:26.386 19:23:03 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:26.386 19:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:26.386 19:23:03 -- common/autotest_common.sh@10 -- # set +x
00:22:26.386 19:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:26.386 19:23:03 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:26.386 19:23:03 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:26.386 Running I/O for 2 seconds...
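The randwrite error-injection pass is driven the same way as the randread pass that just finished: bdevperf is attached to the target with TCP data digest enabled (bdev_nvme_attach_controller --ddgst), the accel layer's crc32c error injection is switched from disable to corrupt (accel_error_inject_error -o crc32c -t corrupt -i 256), perform_tests is run, and afterwards digest.sh reads the per-bdev NVMe error statistics over the bdevperf RPC socket and requires a non-zero COMMAND TRANSIENT TRANSPORT ERROR count (539 in the randread pass above). A minimal stand-alone sketch of that final check, using the socket path, bdev name, rpc.py invocation, and jq filter exactly as they appear in the trace above; the variable names and the final echo are illustrative, not part of digest.sh:

  #!/usr/bin/env bash
  # Read the transient-transport-error counter that bdev_nvme keeps when
  # bdev_nvme_set_options --nvme-error-stat is in effect, and fail if the
  # injected data digest corruption produced no errors on the I/O path.
  BPERF_SOCK=/var/tmp/bperf.sock   # RPC socket bdevperf was started with (-r), per the trace
  BDEV=nvme0n1                     # bdev created by bdev_nvme_attach_controller, per the trace

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b "$BDEV" \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

  (( errcount > 0 )) || exit 1
  echo "observed $errcount transient transport errors on $BDEV"

The data digest errors reported below by tcp.c and nvme_tcp.c are what feed that counter: each corrupted CRC32C is surfaced as a COMMAND TRANSIENT TRANSPORT ERROR completion, which the test then counts.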
00:22:26.386 [2024-02-14 19:23:03.723472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f6890 00:22:26.386 [2024-02-14 19:23:03.723869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.386 [2024-02-14 19:23:03.723904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:26.386 [2024-02-14 19:23:03.733514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eff18 00:22:26.386 [2024-02-14 19:23:03.734018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.386 [2024-02-14 19:23:03.734070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:26.386 [2024-02-14 19:23:03.742476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f2510 00:22:26.386 [2024-02-14 19:23:03.743000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.386 [2024-02-14 19:23:03.743033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:26.386 [2024-02-14 19:23:03.751153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fc128 00:22:26.386 [2024-02-14 19:23:03.752517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.386 [2024-02-14 19:23:03.752544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:26.386 [2024-02-14 19:23:03.760566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f20d8 00:22:26.386 [2024-02-14 19:23:03.761089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.386 [2024-02-14 19:23:03.761131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:26.386 [2024-02-14 19:23:03.769870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e5a90 00:22:26.386 [2024-02-14 19:23:03.771048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.386 [2024-02-14 19:23:03.771085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:26.386 [2024-02-14 19:23:03.779292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e3d08 00:22:26.386 [2024-02-14 19:23:03.780627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.386 [2024-02-14 19:23:03.780660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:22:26.387 [2024-02-14 19:23:03.788473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eea00 00:22:26.387 [2024-02-14 19:23:03.789098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.387 [2024-02-14 19:23:03.789145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:26.387 [2024-02-14 19:23:03.797465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f20d8 00:22:26.387 [2024-02-14 19:23:03.798091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.387 [2024-02-14 19:23:03.798137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.806463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f3a28 00:22:26.647 [2024-02-14 19:23:03.807582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.807612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.815123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e5220 00:22:26.647 [2024-02-14 19:23:03.816115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.816145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.824064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e1710 00:22:26.647 [2024-02-14 19:23:03.825249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.825278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.833455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e5ec8 00:22:26.647 [2024-02-14 19:23:03.833804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.833836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.842432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e7c50 00:22:26.647 [2024-02-14 19:23:03.842961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.843013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.851690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e1710 00:22:26.647 [2024-02-14 19:23:03.852414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.852444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.860677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed920 00:22:26.647 [2024-02-14 19:23:03.861478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.861515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.869625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e4de8 00:22:26.647 [2024-02-14 19:23:03.870096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.870129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.878613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f1ca0 00:22:26.647 [2024-02-14 19:23:03.879174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.879212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.887667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed4e8 00:22:26.647 [2024-02-14 19:23:03.888323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.888354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.896779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ebb98 00:22:26.647 [2024-02-14 19:23:03.897416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.897446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.905723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ec840 00:22:26.647 [2024-02-14 19:23:03.906362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.906394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.914672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed4e8 00:22:26.647 [2024-02-14 19:23:03.915529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.915566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.924915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ec840 00:22:26.647 [2024-02-14 19:23:03.925769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.925798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.934129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190feb58 00:22:26.647 [2024-02-14 19:23:03.935229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.935259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.942096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f1868 00:22:26.647 [2024-02-14 19:23:03.942590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.942629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.951119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e3498 00:22:26.647 [2024-02-14 19:23:03.951585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.951615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.960110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190de470 00:22:26.647 [2024-02-14 19:23:03.960553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.647 [2024-02-14 19:23:03.960583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:26.647 [2024-02-14 19:23:03.969061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e73e0 00:22:26.647 [2024-02-14 19:23:03.969466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 19:23:03.969507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:26.648 [2024-02-14 19:23:03.978086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f4298 00:22:26.648 [2024-02-14 19:23:03.978470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 19:23:03.978510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:26.648 [2024-02-14 19:23:03.987239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f4f40 00:22:26.648 [2024-02-14 19:23:03.987623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 19:23:03.987654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:26.648 [2024-02-14 19:23:03.996149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f4298 00:22:26.648 [2024-02-14 19:23:03.996474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 19:23:03.996514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:26.648 [2024-02-14 19:23:04.005065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e73e0 00:22:26.648 [2024-02-14 19:23:04.005388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 19:23:04.005421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:26.648 [2024-02-14 19:23:04.014051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190de470 00:22:26.648 [2024-02-14 19:23:04.014719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 19:23:04.014748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:26.648 [2024-02-14 19:23:04.022711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e6738 00:22:26.648 [2024-02-14 19:23:04.023089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 19:23:04.023121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:26.648 [2024-02-14 19:23:04.032041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f2d80 00:22:26.648 [2024-02-14 19:23:04.032191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 
19:23:04.032208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:26.648 [2024-02-14 19:23:04.041094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f8e88 00:22:26.648 [2024-02-14 19:23:04.041401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 19:23:04.041448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:26.648 [2024-02-14 19:23:04.050444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f3a28 00:22:26.648 [2024-02-14 19:23:04.051732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 19:23:04.051763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:26.648 [2024-02-14 19:23:04.059622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f7100 00:22:26.648 [2024-02-14 19:23:04.060676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.648 [2024-02-14 19:23:04.060708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.068699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f7100 00:22:26.908 [2024-02-14 19:23:04.069723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.908 [2024-02-14 19:23:04.069754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.077321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190de038 00:22:26.908 [2024-02-14 19:23:04.078108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.908 [2024-02-14 19:23:04.078138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.088238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e2c28 00:22:26.908 [2024-02-14 19:23:04.088765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.908 [2024-02-14 19:23:04.088810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.097183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190de8a8 00:22:26.908 [2024-02-14 19:23:04.098571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:26.908 [2024-02-14 19:23:04.098600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.105964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ec408 00:22:26.908 [2024-02-14 19:23:04.106663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.908 [2024-02-14 19:23:04.106694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.114758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed0b0 00:22:26.908 [2024-02-14 19:23:04.116037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.908 [2024-02-14 19:23:04.116067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.124566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e0a68 00:22:26.908 [2024-02-14 19:23:04.125782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.908 [2024-02-14 19:23:04.125811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.133425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e5220 00:22:26.908 [2024-02-14 19:23:04.134090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.908 [2024-02-14 19:23:04.134120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.142702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190de470 00:22:26.908 [2024-02-14 19:23:04.143184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.908 [2024-02-14 19:23:04.143216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.151548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ef6a8 00:22:26.908 [2024-02-14 19:23:04.152715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.908 [2024-02-14 19:23:04.152745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:26.908 [2024-02-14 19:23:04.160743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e1b48 00:22:26.908 [2024-02-14 19:23:04.161270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22013 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.161300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.170607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f8618 00:22:26.909 [2024-02-14 19:23:04.171325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.171353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.178863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f3a28 00:22:26.909 [2024-02-14 19:23:04.179853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.179883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.188159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e4de8 00:22:26.909 [2024-02-14 19:23:04.188732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.188766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.197370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ec840 00:22:26.909 [2024-02-14 19:23:04.198079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.198108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.205395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fc128 00:22:26.909 [2024-02-14 19:23:04.205554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.205571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.215370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed0b0 00:22:26.909 [2024-02-14 19:23:04.216093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.216122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.224567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e84c0 00:22:26.909 [2024-02-14 19:23:04.225499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:15708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.225536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.233635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed4e8 00:22:26.909 [2024-02-14 19:23:04.234442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.234472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.241671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190df118 00:22:26.909 [2024-02-14 19:23:04.241843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.241901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.251208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eee38 00:22:26.909 [2024-02-14 19:23:04.252273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.252303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.260200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e6b70 00:22:26.909 [2024-02-14 19:23:04.261110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.261139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.269242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fe720 00:22:26.909 [2024-02-14 19:23:04.270129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.270158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.278290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eb328 00:22:26.909 [2024-02-14 19:23:04.279215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.279244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.287542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f6890 00:22:26.909 [2024-02-14 19:23:04.288545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:78 nsid:1 lba:1278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.288576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.296766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e5220 00:22:26.909 [2024-02-14 19:23:04.297426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.297464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.308142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ef6a8 00:22:26.909 [2024-02-14 19:23:04.309243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.309272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.314515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190df118 00:22:26.909 [2024-02-14 19:23:04.315413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.315442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:26.909 [2024-02-14 19:23:04.323820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e4578 00:22:26.909 [2024-02-14 19:23:04.324202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.909 [2024-02-14 19:23:04.324235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:27.169 [2024-02-14 19:23:04.332932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f35f0 00:22:27.169 [2024-02-14 19:23:04.333682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.169 [2024-02-14 19:23:04.333710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:27.169 [2024-02-14 19:23:04.343199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fb8b8 00:22:27.169 [2024-02-14 19:23:04.343950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.169 [2024-02-14 19:23:04.343978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:27.169 [2024-02-14 19:23:04.352397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f4b08 00:22:27.169 [2024-02-14 19:23:04.353206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.169 [2024-02-14 19:23:04.353235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.169 [2024-02-14 19:23:04.360301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ef6a8 00:22:27.169 [2024-02-14 19:23:04.360654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.169 [2024-02-14 19:23:04.360683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:27.169 [2024-02-14 19:23:04.369819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e0630 00:22:27.169 [2024-02-14 19:23:04.370848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.169 [2024-02-14 19:23:04.370883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:27.169 [2024-02-14 19:23:04.379168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f6458 00:22:27.169 [2024-02-14 19:23:04.380312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.169 [2024-02-14 19:23:04.380341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.169 [2024-02-14 19:23:04.388440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e49b0 00:22:27.169 [2024-02-14 19:23:04.389323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.169 [2024-02-14 19:23:04.389352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:27.169 [2024-02-14 19:23:04.397945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f3a28 00:22:27.169 [2024-02-14 19:23:04.398738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.169 [2024-02-14 19:23:04.398766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:27.169 [2024-02-14 19:23:04.405961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e4de8 00:22:27.169 [2024-02-14 19:23:04.407133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.407163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.415098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ee5c8 00:22:27.170 [2024-02-14 
19:23:04.415817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.415847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.423395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e0630 00:22:27.170 [2024-02-14 19:23:04.423879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.423910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.432568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e4de8 00:22:27.170 [2024-02-14 19:23:04.432640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.432658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.442352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eea00 00:22:27.170 [2024-02-14 19:23:04.443543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.443582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.453130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f96f8 00:22:27.170 [2024-02-14 19:23:04.454059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.454086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.461297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fbcf0 00:22:27.170 [2024-02-14 19:23:04.462242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.462271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.471051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e1b48 00:22:27.170 [2024-02-14 19:23:04.471742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.471789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.478562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f4b08 
00:22:27.170 [2024-02-14 19:23:04.479565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.479594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.488183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ee190 00:22:27.170 [2024-02-14 19:23:04.488874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.488902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.497302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fcdd0 00:22:27.170 [2024-02-14 19:23:04.498248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.498276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.506528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190de038 00:22:27.170 [2024-02-14 19:23:04.506948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.506979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.515697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190de8a8 00:22:27.170 [2024-02-14 19:23:04.516447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.516476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.525760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fa3a0 00:22:27.170 [2024-02-14 19:23:04.526216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.526248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.534872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f2510 00:22:27.170 [2024-02-14 19:23:04.535446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.535482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.543834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with 
pdu=0x2000190f6458 00:22:27.170 [2024-02-14 19:23:04.545089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.545119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.553214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e8d30 00:22:27.170 [2024-02-14 19:23:04.554047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.554076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.562933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190de8a8 00:22:27.170 [2024-02-14 19:23:04.563630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.563658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.570965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eb760 00:22:27.170 [2024-02-14 19:23:04.571727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.571756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:27.170 [2024-02-14 19:23:04.580240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f7100 00:22:27.170 [2024-02-14 19:23:04.580693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.170 [2024-02-14 19:23:04.580739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:27.430 [2024-02-14 19:23:04.590250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed4e8 00:22:27.430 [2024-02-14 19:23:04.591375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.430 [2024-02-14 19:23:04.591404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:27.430 [2024-02-14 19:23:04.599368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e0630 00:22:27.430 [2024-02-14 19:23:04.600363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.430 [2024-02-14 19:23:04.600391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:27.430 [2024-02-14 19:23:04.608379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d1bc80) with pdu=0x2000190f81e0 00:22:27.430 [2024-02-14 19:23:04.609124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.430 [2024-02-14 19:23:04.609153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:27.430 [2024-02-14 19:23:04.617407] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e1b48 00:22:27.430 [2024-02-14 19:23:04.618124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.430 [2024-02-14 19:23:04.618153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:27.430 [2024-02-14 19:23:04.626470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed920 00:22:27.430 [2024-02-14 19:23:04.627182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.430 [2024-02-14 19:23:04.627247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:27.430 [2024-02-14 19:23:04.635543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e1f80 00:22:27.430 [2024-02-14 19:23:04.636179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.430 [2024-02-14 19:23:04.636208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:27.430 [2024-02-14 19:23:04.644600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed0b0 00:22:27.430 [2024-02-14 19:23:04.645216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.430 [2024-02-14 19:23:04.645256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:27.430 [2024-02-14 19:23:04.653651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fa3a0 00:22:27.430 [2024-02-14 19:23:04.654232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.654285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.662702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f3a28 00:22:27.431 [2024-02-14 19:23:04.663341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.663401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.671604] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e23b8 00:22:27.431 [2024-02-14 19:23:04.672727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.672756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.680737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e7818 00:22:27.431 [2024-02-14 19:23:04.681268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.681297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.691880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f9b30 00:22:27.431 [2024-02-14 19:23:04.692970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.692997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.698308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fcdd0 00:22:27.431 [2024-02-14 19:23:04.699388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.699417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.708971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f4f40 00:22:27.431 [2024-02-14 19:23:04.709717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.709746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.717017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e6fa8 00:22:27.431 [2024-02-14 19:23:04.717793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.717821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.726807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e5220 00:22:27.431 [2024-02-14 19:23:04.727284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.727309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.735895] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fb8b8 00:22:27.431 [2024-02-14 19:23:04.736518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.736561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.744966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e88f8 00:22:27.431 [2024-02-14 19:23:04.745534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.745599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.753989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f0788 00:22:27.431 [2024-02-14 19:23:04.754576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.754605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.763056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eaab8 00:22:27.431 [2024-02-14 19:23:04.763606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.763636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.771829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190dfdc0 00:22:27.431 [2024-02-14 19:23:04.772943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.772971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.781183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190edd58 00:22:27.431 [2024-02-14 19:23:04.781646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.781676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.791089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190feb58 00:22:27.431 [2024-02-14 19:23:04.791692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.791736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:27.431 
[2024-02-14 19:23:04.800237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e1b48 00:22:27.431 [2024-02-14 19:23:04.801061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.801090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.808612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f46d0 00:22:27.431 [2024-02-14 19:23:04.810034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.810065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.817716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f8a50 00:22:27.431 [2024-02-14 19:23:04.819258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.819287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.826131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f6cc8 00:22:27.431 [2024-02-14 19:23:04.827225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.827271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.836835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e23b8 00:22:27.431 [2024-02-14 19:23:04.837557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.837598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:27.431 [2024-02-14 19:23:04.844857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e0630 00:22:27.431 [2024-02-14 19:23:04.846250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.431 [2024-02-14 19:23:04.846281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:27.691 [2024-02-14 19:23:04.853746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fa7d8 00:22:27.691 [2024-02-14 19:23:04.854609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.691 [2024-02-14 19:23:04.854638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:22:27.692 [2024-02-14 19:23:04.862696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f35f0 00:22:27.692 [2024-02-14 19:23:04.863510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.863547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.872350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e3d08 00:22:27.692 [2024-02-14 19:23:04.873741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.873769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.881524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e99d8 00:22:27.692 [2024-02-14 19:23:04.882321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.882350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.890608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e0ea0 00:22:27.692 [2024-02-14 19:23:04.892001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.892030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.899651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fef90 00:22:27.692 [2024-02-14 19:23:04.901037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.901066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.907854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f81e0 00:22:27.692 [2024-02-14 19:23:04.908480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.908520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.917025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fbcf0 00:22:27.692 [2024-02-14 19:23:04.917676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.917704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 
cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.927508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e99d8 00:22:27.692 [2024-02-14 19:23:04.928916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.928945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.935983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fe720 00:22:27.692 [2024-02-14 19:23:04.937168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.937197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.944941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ec840 00:22:27.692 [2024-02-14 19:23:04.945903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.945948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.954543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f6458 00:22:27.692 [2024-02-14 19:23:04.955955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.955986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.963786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e49b0 00:22:27.692 [2024-02-14 19:23:04.964665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.964697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.972656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f5be8 00:22:27.692 [2024-02-14 19:23:04.973832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.973864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.982333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ef270 00:22:27.692 [2024-02-14 19:23:04.983518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.983572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.991419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190edd58 00:22:27.692 [2024-02-14 19:23:04.991858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.991891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:04.999468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f20d8 00:22:27.692 [2024-02-14 19:23:04.999575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:04.999593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:05.010547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f8a50 00:22:27.692 [2024-02-14 19:23:05.011076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:05.011109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:05.019627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e8d30 00:22:27.692 [2024-02-14 19:23:05.020305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:05.020336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:05.028525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f9b30 00:22:27.692 [2024-02-14 19:23:05.029182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:05.029212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:05.036664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f7970 00:22:27.692 [2024-02-14 19:23:05.038276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:05.038305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:05.044986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f0bc0 00:22:27.692 [2024-02-14 19:23:05.045876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:05.045905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:05.056531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e5a90 00:22:27.692 [2024-02-14 19:23:05.057263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:05.057292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:05.065353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e6fa8 00:22:27.692 [2024-02-14 19:23:05.067092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:05.067125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:05.074962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eb328 00:22:27.692 [2024-02-14 19:23:05.075352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.692 [2024-02-14 19:23:05.075384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:27.692 [2024-02-14 19:23:05.084419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fa7d8 00:22:27.693 [2024-02-14 19:23:05.085022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.693 [2024-02-14 19:23:05.085055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:27.693 [2024-02-14 19:23:05.093381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fbcf0 00:22:27.693 [2024-02-14 19:23:05.093892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.693 [2024-02-14 19:23:05.093924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:27.693 [2024-02-14 19:23:05.102437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f96f8 00:22:27.693 [2024-02-14 19:23:05.102954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.693 [2024-02-14 19:23:05.103004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.110432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f6020 00:22:27.953 [2024-02-14 19:23:05.110685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.110720] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.122118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f57b0 00:22:27.953 [2024-02-14 19:23:05.122779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.122810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.129640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fc128 00:22:27.953 [2024-02-14 19:23:05.130640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.130668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.139086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e9e10 00:22:27.953 [2024-02-14 19:23:05.139353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.139384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.148237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f7538 00:22:27.953 [2024-02-14 19:23:05.148664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.148695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.157335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f31b8 00:22:27.953 [2024-02-14 19:23:05.157728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.157759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.166286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190de470 00:22:27.953 [2024-02-14 19:23:05.166631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.166661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.175198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e3498 00:22:27.953 [2024-02-14 19:23:05.175503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.175537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.184195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e01f8 00:22:27.953 [2024-02-14 19:23:05.184474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.184507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.193246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f2510 00:22:27.953 [2024-02-14 19:23:05.193528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.193564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.202138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e12d8 00:22:27.953 [2024-02-14 19:23:05.202379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.202419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.211093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e27f0 00:22:27.953 [2024-02-14 19:23:05.211732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.211774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:27.953 [2024-02-14 19:23:05.220482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e5ec8 00:22:27.953 [2024-02-14 19:23:05.220750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.953 [2024-02-14 19:23:05.220774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.229517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ebb98 00:22:27.954 [2024-02-14 19:23:05.230161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.230190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.238619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fc128 00:22:27.954 [2024-02-14 19:23:05.239296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 
19:23:05.239327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.249905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e23b8 00:22:27.954 [2024-02-14 19:23:05.250986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.251016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.256602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e4de8 00:22:27.954 [2024-02-14 19:23:05.256878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.256903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.267041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f5be8 00:22:27.954 [2024-02-14 19:23:05.267771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.267797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.276234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e38d0 00:22:27.954 [2024-02-14 19:23:05.277138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.277167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.285335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e3498 00:22:27.954 [2024-02-14 19:23:05.285948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.285975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.294451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f31b8 00:22:27.954 [2024-02-14 19:23:05.295926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.295957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.305174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed4e8 00:22:27.954 [2024-02-14 19:23:05.306281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:27.954 [2024-02-14 19:23:05.306309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.311585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ef270 00:22:27.954 [2024-02-14 19:23:05.312461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.312498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.320817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ed920 00:22:27.954 [2024-02-14 19:23:05.321164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.321195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.331115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e5220 00:22:27.954 [2024-02-14 19:23:05.331967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.331995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.340049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fc998 00:22:27.954 [2024-02-14 19:23:05.341092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.341121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.349742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f57b0 00:22:27.954 [2024-02-14 19:23:05.350499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.350533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.357221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e7818 00:22:27.954 [2024-02-14 19:23:05.358262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.358291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:27.954 [2024-02-14 19:23:05.366305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e8088 00:22:27.954 [2024-02-14 19:23:05.367588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2763 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:27.954 [2024-02-14 19:23:05.367630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.375704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f8618 00:22:28.214 [2024-02-14 19:23:05.376425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.376454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.383832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190dece0 00:22:28.214 [2024-02-14 19:23:05.383912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.383930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.393645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e8d30 00:22:28.214 [2024-02-14 19:23:05.393853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.393890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.402873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f9f68 00:22:28.214 [2024-02-14 19:23:05.403560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.403588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.411699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e7c50 00:22:28.214 [2024-02-14 19:23:05.412153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.412183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.421385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f2948 00:22:28.214 [2024-02-14 19:23:05.422588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.422615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.430506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eea00 00:22:28.214 [2024-02-14 19:23:05.430920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7511 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.430950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.439839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eea00 00:22:28.214 [2024-02-14 19:23:05.440613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.440643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.448989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eff18 00:22:28.214 [2024-02-14 19:23:05.449761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.449789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.458087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190eea00 00:22:28.214 [2024-02-14 19:23:05.458607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.458660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.467221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fdeb0 00:22:28.214 [2024-02-14 19:23:05.467708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.467740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.476248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e84c0 00:22:28.214 [2024-02-14 19:23:05.476716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.476747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.485263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e01f8 00:22:28.214 [2024-02-14 19:23:05.485701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.485733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.494312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ebb98 00:22:28.214 [2024-02-14 19:23:05.494801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:93 nsid:1 lba:24021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.494832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.503312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ef6a8 00:22:28.214 [2024-02-14 19:23:05.503833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.503887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.512359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190dece0 00:22:28.214 [2024-02-14 19:23:05.513007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.513049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.521398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f7970 00:22:28.214 [2024-02-14 19:23:05.521878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.521909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.530552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e6738 00:22:28.214 [2024-02-14 19:23:05.531158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.531203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.539700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190dece0 00:22:28.214 [2024-02-14 19:23:05.540199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.540230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.548825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e4140 00:22:28.214 [2024-02-14 19:23:05.549353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.549399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.557894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ebb98 00:22:28.214 [2024-02-14 19:23:05.558481] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.558537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.566959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e84c0 00:22:28.214 [2024-02-14 19:23:05.567583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.567611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.575913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f4b08 00:22:28.214 [2024-02-14 19:23:05.576561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.576589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.585014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f35f0 00:22:28.214 [2024-02-14 19:23:05.585585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.585628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.594264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e2c28 00:22:28.214 [2024-02-14 19:23:05.595486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.595527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.603993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190df988 00:22:28.214 [2024-02-14 19:23:05.605193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.605222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.613187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ecc78 00:22:28.214 [2024-02-14 19:23:05.613999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.614027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:28.214 [2024-02-14 19:23:05.622068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e12d8 00:22:28.214 [2024-02-14 19:23:05.623618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.214 [2024-02-14 19:23:05.623649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:28.474 [2024-02-14 19:23:05.631213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e1710 00:22:28.474 [2024-02-14 19:23:05.631746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.474 [2024-02-14 19:23:05.631777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:28.474 [2024-02-14 19:23:05.639901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190df118 00:22:28.474 [2024-02-14 19:23:05.640909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.474 [2024-02-14 19:23:05.640938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:28.474 [2024-02-14 19:23:05.648991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f2948 00:22:28.474 [2024-02-14 19:23:05.649856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.474 [2024-02-14 19:23:05.649886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:28.474 [2024-02-14 19:23:05.658703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f0350 00:22:28.474 [2024-02-14 19:23:05.659251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.474 [2024-02-14 19:23:05.659309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:28.474 [2024-02-14 19:23:05.667919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190ea248 00:22:28.474 [2024-02-14 19:23:05.668820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.474 [2024-02-14 19:23:05.668849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:28.474 [2024-02-14 19:23:05.676315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190fe720 00:22:28.474 [2024-02-14 19:23:05.677164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.474 [2024-02-14 19:23:05.677192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:28.474 [2024-02-14 19:23:05.685291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e1710 00:22:28.474 [2024-02-14 
19:23:05.686689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.474 [2024-02-14 19:23:05.686720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:28.474 [2024-02-14 19:23:05.694861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190f6cc8 00:22:28.474 [2024-02-14 19:23:05.695677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.474 [2024-02-14 19:23:05.695704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:28.474 [2024-02-14 19:23:05.703943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bc80) with pdu=0x2000190e0630 00:22:28.474 [2024-02-14 19:23:05.704547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.474 [2024-02-14 19:23:05.704577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:28.474 00:22:28.474 Latency(us) 00:22:28.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.474 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:28.474 nvme0n1 : 2.00 27852.66 108.80 0.00 0.00 4591.19 1846.92 13107.20 00:22:28.474 =================================================================================================================== 00:22:28.474 Total : 27852.66 108.80 0.00 0.00 4591.19 1846.92 13107.20 00:22:28.474 0 00:22:28.474 19:23:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:28.474 19:23:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:28.474 | .driver_specific 00:22:28.474 | .nvme_error 00:22:28.474 | .status_code 00:22:28.474 | .command_transient_transport_error' 00:22:28.474 19:23:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:28.474 19:23:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:28.733 19:23:05 -- host/digest.sh@71 -- # (( 218 > 0 )) 00:22:28.733 19:23:05 -- host/digest.sh@73 -- # killprocess 85516 00:22:28.733 19:23:05 -- common/autotest_common.sh@924 -- # '[' -z 85516 ']' 00:22:28.733 19:23:05 -- common/autotest_common.sh@928 -- # kill -0 85516 00:22:28.733 19:23:05 -- common/autotest_common.sh@929 -- # uname 00:22:28.733 19:23:05 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:28.733 19:23:05 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 85516 00:22:28.733 19:23:05 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:22:28.733 19:23:05 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:22:28.733 killing process with pid 85516 00:22:28.733 19:23:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 85516' 00:22:28.733 Received shutdown signal, test time was about 2.000000 seconds 00:22:28.733 00:22:28.733 Latency(us) 00:22:28.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.733 =================================================================================================================== 00:22:28.733 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.733 19:23:05 -- common/autotest_common.sh@943 -- # kill 85516 00:22:28.733 19:23:05 -- common/autotest_common.sh@948 -- # wait 85516 00:22:28.992 19:23:06 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:22:28.992 19:23:06 -- host/digest.sh@54 -- # local rw bs qd 00:22:28.992 19:23:06 -- host/digest.sh@56 -- # rw=randwrite 00:22:28.992 19:23:06 -- host/digest.sh@56 -- # bs=131072 00:22:28.992 19:23:06 -- host/digest.sh@56 -- # qd=16 00:22:28.992 19:23:06 -- host/digest.sh@58 -- # bperfpid=85607 00:22:28.992 19:23:06 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:28.992 19:23:06 -- host/digest.sh@60 -- # waitforlisten 85607 /var/tmp/bperf.sock 00:22:28.992 19:23:06 -- common/autotest_common.sh@817 -- # '[' -z 85607 ']' 00:22:28.992 19:23:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:28.992 19:23:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:28.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:28.992 19:23:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:28.992 19:23:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:28.992 19:23:06 -- common/autotest_common.sh@10 -- # set +x 00:22:28.992 [2024-02-14 19:23:06.289808] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:22:28.992 [2024-02-14 19:23:06.289901] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85607 ] 00:22:28.992 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:28.992 Zero copy mechanism will not be used. 
00:22:29.250 [2024-02-14 19:23:06.427428] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.250 [2024-02-14 19:23:06.500123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.185 19:23:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:30.185 19:23:07 -- common/autotest_common.sh@850 -- # return 0 00:22:30.185 19:23:07 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:30.185 19:23:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:30.185 19:23:07 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:30.185 19:23:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.185 19:23:07 -- common/autotest_common.sh@10 -- # set +x 00:22:30.185 19:23:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.185 19:23:07 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:30.185 19:23:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:30.443 nvme0n1 00:22:30.443 19:23:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:30.443 19:23:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.443 19:23:07 -- common/autotest_common.sh@10 -- # set +x 00:22:30.443 19:23:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.443 19:23:07 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:30.443 19:23:07 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:30.443 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:30.443 Zero copy mechanism will not be used. 00:22:30.443 Running I/O for 2 seconds... 
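The digest.sh trace above is dense, so the sequence it performs for this pass can be condensed as follows. This is a sketch assembled only from the rpc.py and bdevperf invocations already visible in the trace; the socket path /var/tmp/bperf.sock, target address 10.0.0.2:4420, and NQN are copied from this particular run and are not general defaults.

# Start bdevperf against the bperf RPC socket (this pass: 128 KiB random writes, queue depth 16, 2 s)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

# Enable per-controller NVMe error counters and unlimited bdev retries
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the NVMe/TCP controller with data digest enabled (--ddgst)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# On the target side (issued through rpc_cmd in the trace), make the accel layer corrupt CRC32C
# so that data-digest errors are produced on the wire (32 injections in this pass)
rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then count the transient transport errors recorded by the host
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Each "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pair in the output below corresponds to one of those injected digest failures; the script then asserts that the counter returned by the jq filter is non-zero, as in the "(( 218 > 0 ))" check after the previous pass.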
00:22:30.443 [2024-02-14 19:23:07.817284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.443 [2024-02-14 19:23:07.817662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.443 [2024-02-14 19:23:07.817698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.443 [2024-02-14 19:23:07.821557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.443 [2024-02-14 19:23:07.821771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.443 [2024-02-14 19:23:07.821792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.443 [2024-02-14 19:23:07.825553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.443 [2024-02-14 19:23:07.825650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.443 [2024-02-14 19:23:07.825671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.443 [2024-02-14 19:23:07.829418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.443 [2024-02-14 19:23:07.829523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.444 [2024-02-14 19:23:07.829545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.444 [2024-02-14 19:23:07.833473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.444 [2024-02-14 19:23:07.833579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.444 [2024-02-14 19:23:07.833600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.444 [2024-02-14 19:23:07.837630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.444 [2024-02-14 19:23:07.837714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.444 [2024-02-14 19:23:07.837735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.444 [2024-02-14 19:23:07.841706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.444 [2024-02-14 19:23:07.841834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.444 [2024-02-14 19:23:07.841854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.444 [2024-02-14 19:23:07.845849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.444 [2024-02-14 19:23:07.846044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.444 [2024-02-14 19:23:07.846064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.444 [2024-02-14 19:23:07.849901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.444 [2024-02-14 19:23:07.850070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.444 [2024-02-14 19:23:07.850090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.444 [2024-02-14 19:23:07.854008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.444 [2024-02-14 19:23:07.854103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.444 [2024-02-14 19:23:07.854122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.444 [2024-02-14 19:23:07.858040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.444 [2024-02-14 19:23:07.858182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.444 [2024-02-14 19:23:07.858204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.704 [2024-02-14 19:23:07.862161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.704 [2024-02-14 19:23:07.862302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.704 [2024-02-14 19:23:07.862323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.704 [2024-02-14 19:23:07.866304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.704 [2024-02-14 19:23:07.866387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.704 [2024-02-14 19:23:07.866407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.704 [2024-02-14 19:23:07.870355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.704 [2024-02-14 19:23:07.870528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.704 [2024-02-14 19:23:07.870549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.704 [2024-02-14 19:23:07.874389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.704 [2024-02-14 19:23:07.874588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.704 [2024-02-14 19:23:07.874608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.704 [2024-02-14 19:23:07.878508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.704 [2024-02-14 19:23:07.878715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.704 [2024-02-14 19:23:07.878741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.704 [2024-02-14 19:23:07.882526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.704 [2024-02-14 19:23:07.882789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.704 [2024-02-14 19:23:07.882828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.704 [2024-02-14 19:23:07.886614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.704 [2024-02-14 19:23:07.886835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.704 [2024-02-14 19:23:07.886856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.704 [2024-02-14 19:23:07.890614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.704 [2024-02-14 19:23:07.890759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.704 [2024-02-14 19:23:07.890778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.704 [2024-02-14 19:23:07.894696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.704 [2024-02-14 19:23:07.894774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.704 [2024-02-14 19:23:07.894794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.704 [2024-02-14 19:23:07.898644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.898739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.898758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.902757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.902908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.902944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.906732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.906936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.906957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.910901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.911100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.911122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.914972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.915189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.915224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.918985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.919063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.919082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.923134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.923272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.923292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.927150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.927291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 
[2024-02-14 19:23:07.927311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.931263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.931344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.931364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.935320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.935447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.935467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.939357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.939605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.939643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.943502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.943680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.943700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.947459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.947774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.947805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.951401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.951508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.951557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.955471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.955615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.955635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.959519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.959631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.959651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.963524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.963628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.963647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.967555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.967706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.967726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.971540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.971796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.971821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.975636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.975819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.975839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.979645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.979769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.979790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.983576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.983659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.983679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.987629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.987767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.987786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.991593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.991681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.991700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.995548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.995628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.995647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:07.999597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:07.999749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.705 [2024-02-14 19:23:07.999769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.705 [2024-02-14 19:23:08.003620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.705 [2024-02-14 19:23:08.003862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.003882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.007618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.007803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.007822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.011666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.011788] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.011808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.015725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.015836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.015855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.019757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.019910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.019930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.023706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.023802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.023822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.027694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.027789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.027808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.031742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.031891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.031911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.035728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.035913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.035933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.039765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.039949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.039969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.043704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.043839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.043859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.047624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.047718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.047737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.051647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.051808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.051828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.055665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.055777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.055797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.059639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.059732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.059752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.063736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.063886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.063907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.067748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 
19:23:08.067930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.067949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.071917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.072101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.072121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.075937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.076048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.076067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.079936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.080049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.080069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.083963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.084125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.084145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.088059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.088146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.088166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.092035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.092145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.092165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.096153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 
00:22:30.706 [2024-02-14 19:23:08.096302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.096321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.100107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.100306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.100325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.104280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.104482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.104515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.108374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.706 [2024-02-14 19:23:08.108468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.706 [2024-02-14 19:23:08.108499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.706 [2024-02-14 19:23:08.112725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.707 [2024-02-14 19:23:08.112832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.707 [2024-02-14 19:23:08.112853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.707 [2024-02-14 19:23:08.117217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.707 [2024-02-14 19:23:08.117417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.707 [2024-02-14 19:23:08.117438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.121757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.121952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.121972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.126365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.126461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.126542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.131083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.131316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.131367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.135765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.136060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.136314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.140293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.140396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.140416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.144886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.145004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.145024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.149073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.149154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.149174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.153117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.153245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.153265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.157213] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.157293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.157313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.161223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.161307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.161326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.165364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.165541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.165562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.169400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.169616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.169635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.173543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.173680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.173700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.177508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.177655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.177675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.181482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.181590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.181609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:30.967 [2024-02-14 19:23:08.185591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.185727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.185746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.189573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.189668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.189688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.193582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.193673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.193693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.197730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.197882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.197901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.201758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.202014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.202033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.205858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.206002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.967 [2024-02-14 19:23:08.206021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.967 [2024-02-14 19:23:08.209850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.967 [2024-02-14 19:23:08.210014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.210033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.213817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.213946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.213966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.217898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.218057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.218077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.221967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.222079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.222099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.226022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.226101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.226120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.230076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.230224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.230243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.234194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.234412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.234431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.238343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.238478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.238510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.242351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.242499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.242519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.246336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.246433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.246452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.250453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.250608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.250628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.254544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.254656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.254676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.258523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.258607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.258627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.262567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.262732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.262752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.266604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.266812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.266831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.270652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.270836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.270855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.274638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.274737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.274756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.278609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.278718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.278737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.282617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.282744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.282764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.286680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.286811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.286831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.290694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.290770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.290789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.294800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.294979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.294999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.298833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.299083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.299109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.302988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.303179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.303205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.306871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.307004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.307024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.310819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.310979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.310999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.314918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.315081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.968 [2024-02-14 19:23:08.315101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.968 [2024-02-14 19:23:08.318947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.968 [2024-02-14 19:23:08.319105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.319126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.322957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.323053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 
19:23:08.323073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.327075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.327250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.327270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.331089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.331357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.331382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.335102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.335382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.335407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.339144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.339281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.339301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.343110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.343201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.343221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.347229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.347420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.347440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.351152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.351343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:30.969 [2024-02-14 19:23:08.351363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.355269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.355378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.355397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.359345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.359508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.359527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.363382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.363626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.363652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.367445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.367674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.367696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.371521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.371672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.371692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.375764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.375873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.375893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.969 [2024-02-14 19:23:08.379895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:30.969 [2024-02-14 19:23:08.380070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.969 [2024-02-14 19:23:08.380091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.229 [2024-02-14 19:23:08.384085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.229 [2024-02-14 19:23:08.384223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.229 [2024-02-14 19:23:08.384244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.229 [2024-02-14 19:23:08.388216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.229 [2024-02-14 19:23:08.388313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.229 [2024-02-14 19:23:08.388334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.229 [2024-02-14 19:23:08.392523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.229 [2024-02-14 19:23:08.392671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.229 [2024-02-14 19:23:08.392690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.229 [2024-02-14 19:23:08.396537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.229 [2024-02-14 19:23:08.396805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.229 [2024-02-14 19:23:08.396831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.229 [2024-02-14 19:23:08.400666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.229 [2024-02-14 19:23:08.400759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.229 [2024-02-14 19:23:08.400778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.229 [2024-02-14 19:23:08.404784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.229 [2024-02-14 19:23:08.404892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.229 [2024-02-14 19:23:08.404912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.408738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.408851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.408870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.412763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.412896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.412915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.416780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.416916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.416936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.420817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.420894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.420914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.424933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.425081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.425100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.429050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.429255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.429273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.433174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.433311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.433329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.437280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.437389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.437408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.441281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.441389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.441409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.445323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.445449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.445468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.449458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.449576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.449595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.453442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.453561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.453581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.457701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.457856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.457890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.461693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.461910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.461929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.465721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 
[2024-02-14 19:23:08.465889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.465909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.469697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.469812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.469831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.473665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.473762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.473782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.477692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.477876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.477895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.481710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.481825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.481844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.485710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.485804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.485823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.489807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.489958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.489978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.493839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.494038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.494057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.497855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.498036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.498056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.501884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.501988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.502007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.505798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.505894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.505913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.509822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.230 [2024-02-14 19:23:08.509957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.230 [2024-02-14 19:23:08.509977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.230 [2024-02-14 19:23:08.513820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.513900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.513919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.517845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.517947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.517966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.521936] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.522088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.522107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.526097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.526304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.526323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.530241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.530414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.530434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.534287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.534385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.534405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.538330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.538409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.538427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.542396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.542542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.542563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.546536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.546628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.546664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:31.231 [2024-02-14 19:23:08.550692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.550781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.550801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.554980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.555150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.555181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.559070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.559369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.559394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.563054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.563174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.563194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.567110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.567293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.567312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.571169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.571266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.571285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.575169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.575299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.575318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.579221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.579326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.579345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.583161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.583269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.583288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.587295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.587438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.587458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.591322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.591449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.591468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.595355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.595420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.595439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.599546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.599680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.599699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.603547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.603627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.603646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.607628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.607772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.607792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.611699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.611823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.611859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.615791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.615894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.615915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.619873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.231 [2024-02-14 19:23:08.620043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.231 [2024-02-14 19:23:08.620064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.231 [2024-02-14 19:23:08.624163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.232 [2024-02-14 19:23:08.624408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.232 [2024-02-14 19:23:08.624427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.232 [2024-02-14 19:23:08.628356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.232 [2024-02-14 19:23:08.628589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.232 [2024-02-14 19:23:08.628611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.232 [2024-02-14 19:23:08.632467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.232 [2024-02-14 19:23:08.632671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.232 [2024-02-14 19:23:08.632692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.232 [2024-02-14 19:23:08.636621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.232 [2024-02-14 19:23:08.636713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.232 [2024-02-14 19:23:08.636733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.232 [2024-02-14 19:23:08.640801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.232 [2024-02-14 19:23:08.640973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.232 [2024-02-14 19:23:08.640994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.492 [2024-02-14 19:23:08.645035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.492 [2024-02-14 19:23:08.645152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.492 [2024-02-14 19:23:08.645173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.492 [2024-02-14 19:23:08.649132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.492 [2024-02-14 19:23:08.649246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.492 [2024-02-14 19:23:08.649265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.492 [2024-02-14 19:23:08.653272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.492 [2024-02-14 19:23:08.653441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.492 [2024-02-14 19:23:08.653461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.492 [2024-02-14 19:23:08.657446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.492 [2024-02-14 19:23:08.657635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.492 [2024-02-14 19:23:08.657656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.492 [2024-02-14 19:23:08.661459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.492 [2024-02-14 19:23:08.661607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.492 
[2024-02-14 19:23:08.661628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.492 [2024-02-14 19:23:08.665633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.665773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.665792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.669664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.669801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.669821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.673663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.673803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.673823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.677568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.677693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.677713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.681637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.681740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.681760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.685759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.685917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.685938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.689816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.690030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.690049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.693904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.694077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.694098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.697864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.698036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.698056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.701927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.702052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.702073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.706084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.706228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.706249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.710262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.710354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.710374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.714397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.714521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.714541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.718515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.718680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.718722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.722607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.722844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.722864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.726592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.726715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.726735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.730709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.730891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.730927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.734707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.734831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.734851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.738712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.738853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.738874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.742763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.742861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.742880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.746778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.746967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.746989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.750955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.751116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.751136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.754975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.755231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.755257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.758899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.759063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.759084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.763032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.763164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.763184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.767069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.767174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.767195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.493 [2024-02-14 19:23:08.771145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.493 [2024-02-14 19:23:08.771342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.493 [2024-02-14 19:23:08.771363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.775270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 
[2024-02-14 19:23:08.775395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.775415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.779308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.779428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.779448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.783464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.783655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.783676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.787529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.787760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.787780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.791585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.791749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.791769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.795599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.795734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.795754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.799624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.799724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.799744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.803653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) 
with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.803796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.803816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.807671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.807760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.807779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.811706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.811802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.811821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.815850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.816015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.816036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.819915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.820129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.820149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.823888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.824072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.824092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.827939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.828120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.828140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.832054] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.832149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.832168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.836133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.836305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.836325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.840134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.840284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.840304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.844149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.844266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.844285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.848227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.848394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.848413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.852201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.852474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.852522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.856256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.856348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.856367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.860357] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.860480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.860501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.864390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.864546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.864575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.868556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.868698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.868718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.872676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.872763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.872782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.876598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.876708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.876728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.494 [2024-02-14 19:23:08.880684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.494 [2024-02-14 19:23:08.880845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.494 [2024-02-14 19:23:08.880864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.495 [2024-02-14 19:23:08.884661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.495 [2024-02-14 19:23:08.884782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.495 [2024-02-14 19:23:08.884802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.495 
[2024-02-14 19:23:08.888671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.495 [2024-02-14 19:23:08.888749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.495 [2024-02-14 19:23:08.888769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.495 [2024-02-14 19:23:08.892793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.495 [2024-02-14 19:23:08.892899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.495 [2024-02-14 19:23:08.892918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.495 [2024-02-14 19:23:08.896855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.495 [2024-02-14 19:23:08.896966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.495 [2024-02-14 19:23:08.896985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.495 [2024-02-14 19:23:08.900894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.495 [2024-02-14 19:23:08.901022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.495 [2024-02-14 19:23:08.901042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.495 [2024-02-14 19:23:08.905018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.495 [2024-02-14 19:23:08.905163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.495 [2024-02-14 19:23:08.905184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.909182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.909271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.755 [2024-02-14 19:23:08.909290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.913372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.913570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.755 [2024-02-14 19:23:08.913606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.917470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.917644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.755 [2024-02-14 19:23:08.917664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.921461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.921659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.755 [2024-02-14 19:23:08.921679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.925462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.925669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.755 [2024-02-14 19:23:08.925690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.929569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.929661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.755 [2024-02-14 19:23:08.929681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.933574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.933701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.755 [2024-02-14 19:23:08.933721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.937659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.937786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.755 [2024-02-14 19:23:08.937807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.941643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.941733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.755 [2024-02-14 19:23:08.941753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.945741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.945891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.755 [2024-02-14 19:23:08.945910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.755 [2024-02-14 19:23:08.949788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.755 [2024-02-14 19:23:08.950055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.756 [2024-02-14 19:23:08.950074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.756 [2024-02-14 19:23:08.953882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.756 [2024-02-14 19:23:08.954068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.756 [2024-02-14 19:23:08.954087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.756 [2024-02-14 19:23:08.957889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.756 [2024-02-14 19:23:08.958056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.756 [2024-02-14 19:23:08.958075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.756 [2024-02-14 19:23:08.961865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.756 [2024-02-14 19:23:08.962008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.756 [2024-02-14 19:23:08.962027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.756 [2024-02-14 19:23:08.965908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.756 [2024-02-14 19:23:08.966037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.756 [2024-02-14 19:23:08.966056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.756 [2024-02-14 19:23:08.969935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:31.756 [2024-02-14 19:23:08.970045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.756 [2024-02-14 19:23:08.970064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:31.756 [2024-02-14 19:23:08.973912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90
00:22:31.756 [2024-02-14 19:23:08.974023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:31.756 [2024-02-14 19:23:08.974042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:31.756 [2024-02-14 19:23:08.977934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90
00:22:31.756 [2024-02-14 19:23:08.978083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:31.756 [2024-02-14 19:23:08.978103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence repeats for the remaining WRITE commands on qid:1 (cid:0 and cid:15, nsid:1, len:32, varying lba): each data digest error reported by tcp.c:2034:data_crc32_calc_done on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 is followed by the command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; timestamps run from 19:23:08.981946 through 19:23:09.563629 ...]
00:22:32.282 [2024-02-14 19:23:09.567544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90
00:22:32.282 [2024-02-14 19:23:09.567655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.282 [2024-02-14 19:23:09.567675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.282 [2024-02-14 19:23:09.571676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.571795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.571817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.575756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.575884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.575903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.579896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.580037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.580056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.583993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.584144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.584163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.588100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.588205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.588225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.592104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.592182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.592201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.596269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.596412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.596432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.600330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.600427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.600447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.604348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.604483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.604517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.608459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.608616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.608637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.612483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.612640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.612660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.616596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.616748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.616767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.620615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.620719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.620739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.624576] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.624655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.624673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.628614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.628749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.628768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.632664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.632781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.632801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.636649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.636785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.636805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.640783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.640911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.640930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.644841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.644921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.644940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.648954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.649109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.649128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.652940] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.653044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.653063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.657004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.657082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.657101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.661111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.661237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.661256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.665143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.665239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.665258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.669211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.669332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.669351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.673363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.673509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.673528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.283 [2024-02-14 19:23:09.677388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.283 [2024-02-14 19:23:09.677508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.283 [2024-02-14 19:23:09.677528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.284 
[2024-02-14 19:23:09.681442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.284 [2024-02-14 19:23:09.681642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.284 [2024-02-14 19:23:09.681678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.284 [2024-02-14 19:23:09.685537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.284 [2024-02-14 19:23:09.685674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.284 [2024-02-14 19:23:09.685694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.284 [2024-02-14 19:23:09.689553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.284 [2024-02-14 19:23:09.689681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.284 [2024-02-14 19:23:09.689701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.284 [2024-02-14 19:23:09.693606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.284 [2024-02-14 19:23:09.693769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.284 [2024-02-14 19:23:09.693788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.543 [2024-02-14 19:23:09.697770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.543 [2024-02-14 19:23:09.697897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.543 [2024-02-14 19:23:09.697917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.543 [2024-02-14 19:23:09.701828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.543 [2024-02-14 19:23:09.701932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.543 [2024-02-14 19:23:09.701951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.543 [2024-02-14 19:23:09.705987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.543 [2024-02-14 19:23:09.706121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.543 [2024-02-14 19:23:09.706140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:32.543 [2024-02-14 19:23:09.710075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.543 [2024-02-14 19:23:09.710174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.543 [2024-02-14 19:23:09.710193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.543 [2024-02-14 19:23:09.714126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.543 [2024-02-14 19:23:09.714259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.543 [2024-02-14 19:23:09.714278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.543 [2024-02-14 19:23:09.718233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.543 [2024-02-14 19:23:09.718338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.543 [2024-02-14 19:23:09.718358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.543 [2024-02-14 19:23:09.722352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.543 [2024-02-14 19:23:09.722432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.543 [2024-02-14 19:23:09.722452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.543 [2024-02-14 19:23:09.726425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.543 [2024-02-14 19:23:09.726565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.543 [2024-02-14 19:23:09.726585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.730445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.730553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.730572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.734460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.734585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.734605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.738617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.738744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.738764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.742618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.742723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.742742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.746726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.746861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.746881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.750802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.750941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.750961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.754875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.754990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.755010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.759043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.759187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.759207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.763170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.763309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.763345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.767332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.767453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.767473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.771376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.771548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.771568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.775464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.775601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.775621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.779558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.779699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.779719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.783601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.783729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.783749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.787756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.787865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.787884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.791804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.791945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.791964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.795896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.796005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.796024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.799960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.800093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.800112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.544 [2024-02-14 19:23:09.804139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bfc0) with pdu=0x2000190fef90 00:22:32.544 [2024-02-14 19:23:09.804260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.544 [2024-02-14 19:23:09.804280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.544 00:22:32.544 Latency(us) 00:22:32.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.544 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:32.544 nvme0n1 : 2.00 7564.16 945.52 0.00 0.00 2110.75 1668.19 11439.01 00:22:32.544 =================================================================================================================== 00:22:32.544 Total : 7564.16 945.52 0.00 0.00 2110.75 1668.19 11439.01 00:22:32.544 0 00:22:32.544 19:23:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:32.544 19:23:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:32.544 | .driver_specific 00:22:32.544 | .nvme_error 00:22:32.544 | .status_code 00:22:32.544 | .command_transient_transport_error' 00:22:32.544 19:23:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:32.544 19:23:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:32.804 19:23:10 -- host/digest.sh@71 -- # (( 488 > 0 )) 00:22:32.804 19:23:10 -- host/digest.sh@73 -- # killprocess 85607 00:22:32.804 19:23:10 -- common/autotest_common.sh@924 -- # '[' -z 85607 ']' 00:22:32.804 19:23:10 -- common/autotest_common.sh@928 -- # kill -0 85607 00:22:32.804 19:23:10 -- common/autotest_common.sh@929 -- # uname 00:22:32.804 19:23:10 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:32.804 19:23:10 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 85607 00:22:32.804 killing process with pid 85607 00:22:32.804 Received shutdown signal, test time was about 2.000000 seconds 00:22:32.804 00:22:32.804 Latency(us) 00:22:32.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.804 
=================================================================================================================== 00:22:32.804 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.804 19:23:10 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:22:32.804 19:23:10 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:22:32.804 19:23:10 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 85607' 00:22:32.804 19:23:10 -- common/autotest_common.sh@943 -- # kill 85607 00:22:32.804 19:23:10 -- common/autotest_common.sh@948 -- # wait 85607 00:22:33.063 19:23:10 -- host/digest.sh@115 -- # killprocess 85301 00:22:33.063 19:23:10 -- common/autotest_common.sh@924 -- # '[' -z 85301 ']' 00:22:33.063 19:23:10 -- common/autotest_common.sh@928 -- # kill -0 85301 00:22:33.063 19:23:10 -- common/autotest_common.sh@929 -- # uname 00:22:33.063 19:23:10 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:33.063 19:23:10 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 85301 00:22:33.063 killing process with pid 85301 00:22:33.063 19:23:10 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:22:33.063 19:23:10 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:22:33.063 19:23:10 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 85301' 00:22:33.063 19:23:10 -- common/autotest_common.sh@943 -- # kill 85301 00:22:33.063 19:23:10 -- common/autotest_common.sh@948 -- # wait 85301 00:22:33.322 ************************************ 00:22:33.322 END TEST nvmf_digest_error 00:22:33.322 ************************************ 00:22:33.322 00:22:33.322 real 0m17.698s 00:22:33.322 user 0m31.680s 00:22:33.322 sys 0m5.412s 00:22:33.322 19:23:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:33.322 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:22:33.322 19:23:10 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:33.322 19:23:10 -- host/digest.sh@139 -- # nvmftestfini 00:22:33.322 19:23:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:33.322 19:23:10 -- nvmf/common.sh@116 -- # sync 00:22:33.322 19:23:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:33.322 19:23:10 -- nvmf/common.sh@119 -- # set +e 00:22:33.322 19:23:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:33.322 19:23:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:33.322 rmmod nvme_tcp 00:22:33.322 rmmod nvme_fabrics 00:22:33.581 rmmod nvme_keyring 00:22:33.581 19:23:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:33.581 19:23:10 -- nvmf/common.sh@123 -- # set -e 00:22:33.581 19:23:10 -- nvmf/common.sh@124 -- # return 0 00:22:33.581 19:23:10 -- nvmf/common.sh@477 -- # '[' -n 85301 ']' 00:22:33.581 19:23:10 -- nvmf/common.sh@478 -- # killprocess 85301 00:22:33.581 19:23:10 -- common/autotest_common.sh@924 -- # '[' -z 85301 ']' 00:22:33.581 19:23:10 -- common/autotest_common.sh@928 -- # kill -0 85301 00:22:33.581 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (85301) - No such process 00:22:33.581 Process with pid 85301 is not found 00:22:33.581 19:23:10 -- common/autotest_common.sh@951 -- # echo 'Process with pid 85301 is not found' 00:22:33.581 19:23:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:33.581 19:23:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:33.581 19:23:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:33.581 19:23:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.581 19:23:10 -- 
nvmf/common.sh@277 -- # remove_spdk_ns 00:22:33.581 19:23:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.581 19:23:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.581 19:23:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.581 19:23:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:33.581 00:22:33.581 real 0m36.610s 00:22:33.581 user 1m4.960s 00:22:33.581 sys 0m10.994s 00:22:33.581 19:23:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:33.581 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:22:33.581 ************************************ 00:22:33.581 END TEST nvmf_digest 00:22:33.581 ************************************ 00:22:33.581 19:23:10 -- nvmf/nvmf.sh@109 -- # [[ 1 -eq 1 ]] 00:22:33.581 19:23:10 -- nvmf/nvmf.sh@109 -- # [[ tcp == \t\c\p ]] 00:22:33.581 19:23:10 -- nvmf/nvmf.sh@111 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:33.581 19:23:10 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:22:33.581 19:23:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:22:33.581 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:22:33.581 ************************************ 00:22:33.581 START TEST nvmf_mdns_discovery 00:22:33.581 ************************************ 00:22:33.581 19:23:10 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:33.581 * Looking for test storage... 00:22:33.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:33.581 19:23:10 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:33.581 19:23:10 -- nvmf/common.sh@7 -- # uname -s 00:22:33.581 19:23:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.581 19:23:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.581 19:23:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.581 19:23:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.581 19:23:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.581 19:23:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.581 19:23:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.581 19:23:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.581 19:23:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.581 19:23:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.581 19:23:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:22:33.581 19:23:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:22:33.581 19:23:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.581 19:23:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.581 19:23:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:33.581 19:23:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:33.581 19:23:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.581 19:23:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.581 19:23:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.581 19:23:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.581 19:23:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.582 19:23:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.582 19:23:10 -- paths/export.sh@5 -- # export PATH 00:22:33.582 19:23:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.582 19:23:10 -- nvmf/common.sh@46 -- # : 0 00:22:33.582 19:23:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:33.582 19:23:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:33.582 19:23:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:33.582 19:23:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.582 19:23:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.582 19:23:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:33.582 19:23:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:33.582 19:23:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:33.582 19:23:10 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:22:33.582 19:23:10 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:22:33.582 19:23:10 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:33.582 19:23:10 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:33.582 19:23:10 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:33.582 19:23:10 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:33.582 19:23:10 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:22:33.582 19:23:10 
-- host/mdns_discovery.sh@23 -- # nvmftestinit 00:22:33.582 19:23:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:33.582 19:23:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.582 19:23:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:33.582 19:23:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:33.582 19:23:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:33.582 19:23:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.582 19:23:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.582 19:23:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.841 19:23:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:33.841 19:23:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:33.841 19:23:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:33.841 19:23:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:33.841 19:23:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:33.841 19:23:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:33.841 19:23:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.841 19:23:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.841 19:23:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:33.841 19:23:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:33.841 19:23:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:33.841 19:23:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:33.841 19:23:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:33.841 19:23:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.841 19:23:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:33.841 19:23:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:33.841 19:23:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:33.841 19:23:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:33.841 19:23:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:33.841 19:23:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:33.841 Cannot find device "nvmf_tgt_br" 00:22:33.841 19:23:11 -- nvmf/common.sh@154 -- # true 00:22:33.841 19:23:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:33.841 Cannot find device "nvmf_tgt_br2" 00:22:33.841 19:23:11 -- nvmf/common.sh@155 -- # true 00:22:33.841 19:23:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:33.841 19:23:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:33.841 Cannot find device "nvmf_tgt_br" 00:22:33.841 19:23:11 -- nvmf/common.sh@157 -- # true 00:22:33.841 19:23:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:33.841 Cannot find device "nvmf_tgt_br2" 00:22:33.841 19:23:11 -- nvmf/common.sh@158 -- # true 00:22:33.841 19:23:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:33.841 19:23:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:33.841 19:23:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:33.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.841 19:23:11 -- nvmf/common.sh@161 -- # true 00:22:33.841 19:23:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:33.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:22:33.841 19:23:11 -- nvmf/common.sh@162 -- # true 00:22:33.841 19:23:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:33.841 19:23:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:33.841 19:23:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:33.841 19:23:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:33.841 19:23:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:33.841 19:23:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:33.841 19:23:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:33.841 19:23:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:33.841 19:23:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:33.841 19:23:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:33.841 19:23:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:33.841 19:23:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:33.841 19:23:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:33.841 19:23:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:33.841 19:23:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:33.841 19:23:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:33.841 19:23:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:33.841 19:23:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:33.841 19:23:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:33.841 19:23:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:34.100 19:23:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:34.100 19:23:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:34.100 19:23:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:34.100 19:23:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:34.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:22:34.100 00:22:34.100 --- 10.0.0.2 ping statistics --- 00:22:34.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.100 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:34.100 19:23:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:34.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:34.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:22:34.100 00:22:34.100 --- 10.0.0.3 ping statistics --- 00:22:34.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.100 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:34.100 19:23:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:34.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:34.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:34.100 00:22:34.100 --- 10.0.0.1 ping statistics --- 00:22:34.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.100 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:34.100 19:23:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.100 19:23:11 -- nvmf/common.sh@421 -- # return 0 00:22:34.101 19:23:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:34.101 19:23:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.101 19:23:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:34.101 19:23:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:34.101 19:23:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.101 19:23:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:34.101 19:23:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:34.101 19:23:11 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:34.101 19:23:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:34.101 19:23:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:34.101 19:23:11 -- common/autotest_common.sh@10 -- # set +x 00:22:34.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.101 19:23:11 -- nvmf/common.sh@469 -- # nvmfpid=85905 00:22:34.101 19:23:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:34.101 19:23:11 -- nvmf/common.sh@470 -- # waitforlisten 85905 00:22:34.101 19:23:11 -- common/autotest_common.sh@817 -- # '[' -z 85905 ']' 00:22:34.101 19:23:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.101 19:23:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:34.101 19:23:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.101 19:23:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:34.101 19:23:11 -- common/autotest_common.sh@10 -- # set +x 00:22:34.101 [2024-02-14 19:23:11.385594] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:22:34.101 [2024-02-14 19:23:11.385700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.360 [2024-02-14 19:23:11.521083] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.360 [2024-02-14 19:23:11.590779] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:34.360 [2024-02-14 19:23:11.590953] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.360 [2024-02-14 19:23:11.590967] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.360 [2024-02-14 19:23:11.590975] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:34.360 [2024-02-14 19:23:11.591005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.928 19:23:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:34.929 19:23:12 -- common/autotest_common.sh@850 -- # return 0 00:22:34.929 19:23:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:34.929 19:23:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:34.929 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:34.929 19:23:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.929 19:23:12 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:34.929 19:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.929 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:34.929 19:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.929 19:23:12 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:34.929 19:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.929 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:35.188 19:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.188 19:23:12 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.188 19:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.188 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:35.188 [2024-02-14 19:23:12.416194] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.188 19:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.188 19:23:12 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:35.188 19:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.188 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:35.188 [2024-02-14 19:23:12.424286] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:35.188 19:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.188 19:23:12 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:35.188 19:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.188 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:35.188 null0 00:22:35.188 19:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.188 19:23:12 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:35.188 19:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.188 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:35.188 null1 00:22:35.188 19:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.188 19:23:12 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:35.188 19:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.188 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:35.188 null2 00:22:35.188 19:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.188 19:23:12 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:35.188 19:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.188 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:35.188 null3 00:22:35.188 19:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.188 19:23:12 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
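rpc_cmd in these lines is the autotest wrapper around the repository's scripts/rpc.py, talking to the target's default socket (/var/tmp/spdk.sock, per the waitforlisten message above); that wrapper mapping and socket path are assumptions here, while the RPC names and arguments are copied from the run. Issued by hand, the target bring-up is roughly:

    # Target bring-up, reconstructed from the rpc_cmd calls above
    # (assumes scripts/rpc.py from the SPDK repo and the default /var/tmp/spdk.sock).
    RPC="scripts/rpc.py"
    $RPC nvmf_set_config --discovery-filter=address   # filter discovery log entries by address
    $RPC framework_start_init                         # finish init; the target was started with --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, flags exactly as used in this run
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    for n in null0 null1 null2 null3; do
        $RPC bdev_null_create "$n" 1000 512           # null bdevs: 1000 MB, 512-byte blocks
    done
    $RPC bdev_wait_for_examine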
00:22:35.188 19:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.188 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:35.188 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:35.188 19:23:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.188 19:23:12 -- host/mdns_discovery.sh@47 -- # hostpid=85955 00:22:35.188 19:23:12 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:35.188 19:23:12 -- host/mdns_discovery.sh@48 -- # waitforlisten 85955 /tmp/host.sock 00:22:35.188 19:23:12 -- common/autotest_common.sh@817 -- # '[' -z 85955 ']' 00:22:35.188 19:23:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:22:35.188 19:23:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:35.188 19:23:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:35.188 19:23:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:35.188 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:35.188 [2024-02-14 19:23:12.512260] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:22:35.188 [2024-02-14 19:23:12.512468] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85955 ] 00:22:35.447 [2024-02-14 19:23:12.641756] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.447 [2024-02-14 19:23:12.723835] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:35.447 [2024-02-14 19:23:12.724305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.392 19:23:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:36.392 19:23:13 -- common/autotest_common.sh@850 -- # return 0 00:22:36.392 19:23:13 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:36.392 19:23:13 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:36.392 19:23:13 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:36.392 19:23:13 -- host/mdns_discovery.sh@57 -- # avahipid=85980 00:22:36.393 19:23:13 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:36.393 19:23:13 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:36.393 19:23:13 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:36.393 Process 1001 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:36.393 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:36.393 Successfully dropped root privileges. 00:22:36.393 avahi-daemon 0.8 starting up. 00:22:36.393 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:36.393 Successfully called chroot(). 00:22:36.393 Successfully dropped remaining capabilities. 00:22:37.329 No service file found in /etc/avahi/services. 00:22:37.329 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:37.329 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
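The mDNS responder for the test is a stock avahi-daemon started inside the target namespace, pinned to the two target interfaces by the inline '[server]' config echoed just above; the startup messages around this point (joining the IPv4 multicast groups on nvmf_tgt_if/nvmf_tgt_if2 and registering the 10.0.0.2/10.0.0.3 address records) are its normal bring-up. A hand-run equivalent of that invocation, assuming bash process substitution as in the /dev/fd/63 form above:

    # avahi-daemon inside the target namespace, fed the same inline config as the test
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
        '[server]' \
        'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
        'use-ipv4=yes' \
        'use-ipv6=no') &

The discovery endpoint itself is announced a few steps further down with ip netns exec nvmf_tgt_ns_spdk avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp, which is the record the host-side browser later resolves.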
00:22:37.329 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:37.329 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:37.329 Network interface enumeration completed. 00:22:37.329 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:22:37.329 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:37.329 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:22:37.329 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:37.329 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 2187733153. 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:37.329 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.329 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.329 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:37.329 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.329 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.329 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:37.329 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@68 -- # sort 00:22:37.329 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@68 -- # xargs 00:22:37.329 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:37.329 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@64 -- # sort 00:22:37.329 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.329 19:23:14 -- host/mdns_discovery.sh@64 -- # xargs 00:22:37.329 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:37.588 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.588 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.588 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:37.588 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@68 -- # sort 00:22:37.588 19:23:14 -- common/autotest_common.sh@10 -- # set +x 
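On the initiator side a second nvmf_tgt instance acts as the NVMe host, driven over its own RPC socket (/tmp/host.sock, pid 85955 above); once bdev_nvme logging is enabled, bdev_nvme_start_mdns_discovery registers a browser for _nvme-disc._tcp, and the test then repeatedly dumps controllers and bdevs, which stay empty until the CDC record is published and resolved. A hand-run equivalent, with the binary path taken from this job and an hrpc helper introduced here purely for brevity:

    # Host side: second SPDK app on its own RPC socket, then start mDNS discovery.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hrpc() { scripts/rpc.py -s /tmp/host.sock "$@"; }   # helper name is ours, not the test's
    hrpc log_set_flag bdev_nvme                          # the test waits for the socket before issuing RPCs
    hrpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # Poll what discovery has attached so far (empty until the CDC service is resolved):
    hrpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    hrpc bdev_get_bdevs            | jq -r '.[].name' | sort | xargs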
00:22:37.588 19:23:14 -- host/mdns_discovery.sh@68 -- # xargs 00:22:37.588 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.588 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.588 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@64 -- # sort 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@64 -- # xargs 00:22:37.588 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:37.588 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.588 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.588 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@68 -- # xargs 00:22:37.588 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.588 19:23:14 -- host/mdns_discovery.sh@68 -- # sort 00:22:37.589 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.589 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.589 19:23:14 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:37.589 19:23:14 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:37.589 19:23:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:37.589 19:23:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.589 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.589 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.589 19:23:14 -- host/mdns_discovery.sh@64 -- # sort 00:22:37.589 19:23:14 -- host/mdns_discovery.sh@64 -- # xargs 00:22:37.589 [2024-02-14 19:23:14.947050] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:37.589 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.589 19:23:14 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:37.589 19:23:14 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:37.589 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.589 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.589 [2024-02-14 19:23:14.993159] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.589 19:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.589 19:23:14 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:37.589 19:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.589 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:22:37.848 19:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.848 19:23:15 -- host/mdns_discovery.sh@111 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:37.848 19:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.848 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:22:37.848 19:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.848 19:23:15 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:37.848 19:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.848 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:22:37.848 19:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.848 19:23:15 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:37.848 19:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.848 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:22:37.848 19:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.848 19:23:15 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:37.848 19:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.848 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:22:37.848 [2024-02-14 19:23:15.033138] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:37.848 19:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.848 19:23:15 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:37.848 19:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.848 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:22:37.848 [2024-02-14 19:23:15.041125] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:37.848 19:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.848 19:23:15 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:37.848 19:23:15 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=86035 00:22:37.848 19:23:15 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:38.784 [2024-02-14 19:23:15.847049] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:38.784 Established under name 'CDC' 00:22:39.043 [2024-02-14 19:23:16.247061] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:39.043 [2024-02-14 19:23:16.247082] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:22:39.043 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:39.043 cookie is 0 00:22:39.043 is_local: 1 00:22:39.043 our_own: 0 00:22:39.043 wide_area: 0 00:22:39.043 multicast: 1 00:22:39.043 cached: 1 00:22:39.043 [2024-02-14 19:23:16.347055] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:39.043 [2024-02-14 19:23:16.347076] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:22:39.043 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:39.043 cookie is 0 00:22:39.043 is_local: 1 00:22:39.043 our_own: 0 00:22:39.043 wide_area: 0 00:22:39.043 multicast: 1 00:22:39.043 cached: 1 00:22:39.979 [2024-02-14 19:23:17.260302] 
bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:39.979 [2024-02-14 19:23:17.260322] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:39.979 [2024-02-14 19:23:17.260338] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:39.979 [2024-02-14 19:23:17.346392] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:39.979 [2024-02-14 19:23:17.360120] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:39.979 [2024-02-14 19:23:17.360138] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:39.979 [2024-02-14 19:23:17.360152] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:40.238 [2024-02-14 19:23:17.412577] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:40.238 [2024-02-14 19:23:17.412608] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:40.238 [2024-02-14 19:23:17.445786] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:40.238 [2024-02-14 19:23:17.500311] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:40.238 [2024-02-14 19:23:17.500336] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:42.807 19:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:42.807 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@80 -- # sort 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@80 -- # xargs 00:22:42.807 19:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:42.807 19:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.807 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@76 -- # sort 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@76 -- # xargs 00:22:42.807 19:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:42.807 19:23:20 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.807 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@68 -- # sort 00:22:42.807 19:23:20 -- host/mdns_discovery.sh@68 -- # xargs 00:22:42.807 19:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.808 19:23:20 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:42.808 19:23:20 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:22:42.808 19:23:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.808 19:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.808 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:42.808 19:23:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:42.808 19:23:20 -- host/mdns_discovery.sh@64 -- # sort 00:22:42.808 19:23:20 -- host/mdns_discovery.sh@64 -- # xargs 00:22:43.066 19:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:43.066 19:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.066 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@72 -- # xargs 00:22:43.066 19:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:43.066 19:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.066 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@72 -- # xargs 00:22:43.066 19:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:43.066 19:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.066 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:43.066 19:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:43.066 19:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.066 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:43.066 19:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:22:43.066 19:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.066 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:43.066 19:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.066 19:23:20 -- host/mdns_discovery.sh@139 -- # sleep 1 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:44.442 19:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.442 19:23:21 -- common/autotest_common.sh@10 -- # set +x 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@64 -- # xargs 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@64 -- # sort 00:22:44.442 19:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:44.442 19:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.442 19:23:21 -- common/autotest_common.sh@10 -- # set +x 00:22:44.442 19:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:44.442 19:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.442 19:23:21 -- common/autotest_common.sh@10 -- # set +x 00:22:44.442 [2024-02-14 19:23:21.547790] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:44.442 [2024-02-14 19:23:21.547984] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:44.442 [2024-02-14 19:23:21.548010] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:44.442 [2024-02-14 19:23:21.548966] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:44.442 [2024-02-14 19:23:21.548986] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:44.442 19:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:22:44.442 19:23:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.442 19:23:21 -- common/autotest_common.sh@10 -- # set +x 00:22:44.442 [2024-02-14 19:23:21.555733] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:44.442 [2024-02-14 19:23:21.555985] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:44.442 [2024-02-14 19:23:21.556028] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:44.442 19:23:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.442 19:23:21 -- host/mdns_discovery.sh@149 -- # sleep 1 00:22:44.442 [2024-02-14 19:23:21.687066] bdev_nvme.c:6628:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:22:44.442 [2024-02-14 19:23:21.687204] bdev_nvme.c:6628:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:22:44.443 [2024-02-14 19:23:21.744249] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:44.443 [2024-02-14 19:23:21.744270] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:44.443 [2024-02-14 19:23:21.744275] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:44.443 [2024-02-14 19:23:21.744290] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:44.443 [2024-02-14 19:23:21.744348] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:44.443 [2024-02-14 19:23:21.744355] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:44.443 [2024-02-14 19:23:21.744360] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:44.443 [2024-02-14 19:23:21.744371] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:44.443 [2024-02-14 19:23:21.790151] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:44.443 [2024-02-14 19:23:21.790169] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:44.443 [2024-02-14 19:23:21.790203] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:44.443 [2024-02-14 19:23:21.790211] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@68 -- # sort 00:22:45.380 19:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@68 -- # xargs 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:45.380 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:45.380 19:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.380 19:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.380 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@64 -- # sort 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@64 -- # xargs 00:22:45.380 19:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:45.380 19:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.380 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@72 -- # xargs 00:22:45.380 19:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:45.380 19:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.380 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:45.380 19:23:22 -- host/mdns_discovery.sh@72 -- # xargs 00:22:45.380 19:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.642 19:23:22 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:45.642 19:23:22 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:22:45.642 19:23:22 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:45.642 19:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.642 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:45.642 19:23:22 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:45.642 19:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.642 19:23:22 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:45.642 19:23:22 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:45.642 19:23:22 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:22:45.642 19:23:22 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:45.642 19:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.642 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:45.642 [2024-02-14 19:23:22.868982] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:45.642 [2024-02-14 19:23:22.869009] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.642 [2024-02-14 19:23:22.869039] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:45.642 [2024-02-14 19:23:22.869050] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:45.642 [2024-02-14 19:23:22.869899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.642 [2024-02-14 19:23:22.869929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.642 [2024-02-14 19:23:22.869941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.642 [2024-02-14 19:23:22.869949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.642 [2024-02-14 19:23:22.869958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.642 [2024-02-14 19:23:22.869965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.642 [2024-02-14 19:23:22.869974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.642 [2024-02-14 19:23:22.869981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.642 [2024-02-14 19:23:22.869990] nvme_tcp.c: 
320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.642 19:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.642 19:23:22 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:45.642 19:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.642 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:45.642 [2024-02-14 19:23:22.877049] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:45.642 [2024-02-14 19:23:22.877109] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:45.642 [2024-02-14 19:23:22.879820] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.642 19:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.642 19:23:22 -- host/mdns_discovery.sh@162 -- # sleep 1 00:22:45.642 [2024-02-14 19:23:22.882768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.642 [2024-02-14 19:23:22.882794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.642 [2024-02-14 19:23:22.882805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.642 [2024-02-14 19:23:22.882812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.642 [2024-02-14 19:23:22.882822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.642 [2024-02-14 19:23:22.882830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.642 [2024-02-14 19:23:22.882839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.642 [2024-02-14 19:23:22.882846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.642 [2024-02-14 19:23:22.882854] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.642 [2024-02-14 19:23:22.889836] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.642 [2024-02-14 19:23:22.889937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.642 [2024-02-14 19:23:22.889977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.642 [2024-02-14 19:23:22.889992] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.642 [2024-02-14 19:23:22.890002] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.642 [2024-02-14 19:23:22.890017] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.642 [2024-02-14 19:23:22.890030] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.642 [2024-02-14 19:23:22.890039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.642 [2024-02-14 19:23:22.890047] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.642 [2024-02-14 19:23:22.890066] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.642 [2024-02-14 19:23:22.892736] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.642 [2024-02-14 19:23:22.899894] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.642 [2024-02-14 19:23:22.899964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.642 [2024-02-14 19:23:22.900001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.642 [2024-02-14 19:23:22.900014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.642 [2024-02-14 19:23:22.900023] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.642 [2024-02-14 19:23:22.900036] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.642 [2024-02-14 19:23:22.900049] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.642 [2024-02-14 19:23:22.900056] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.642 [2024-02-14 19:23:22.900064] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.642 [2024-02-14 19:23:22.900076] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.642 [2024-02-14 19:23:22.902745] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.642 [2024-02-14 19:23:22.902807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.642 [2024-02-14 19:23:22.902844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.642 [2024-02-14 19:23:22.902857] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.642 [2024-02-14 19:23:22.902866] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.643 [2024-02-14 19:23:22.902879] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.643 [2024-02-14 19:23:22.902891] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.643 [2024-02-14 19:23:22.902908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.643 [2024-02-14 19:23:22.902925] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.643 [2024-02-14 19:23:22.902937] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:45.643 [2024-02-14 19:23:22.909936] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.643 [2024-02-14 19:23:22.910006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.910044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.910058] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.643 [2024-02-14 19:23:22.910069] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.643 [2024-02-14 19:23:22.910082] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.643 [2024-02-14 19:23:22.910094] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.643 [2024-02-14 19:23:22.910101] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.643 [2024-02-14 19:23:22.910109] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.643 [2024-02-14 19:23:22.910134] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.643 [2024-02-14 19:23:22.912783] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.643 [2024-02-14 19:23:22.912857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.912894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.912908] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.643 [2024-02-14 19:23:22.912917] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.643 [2024-02-14 19:23:22.912930] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.643 [2024-02-14 19:23:22.912942] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.643 [2024-02-14 19:23:22.912949] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.643 [2024-02-14 19:23:22.912957] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.643 [2024-02-14 19:23:22.912969] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:45.643 [2024-02-14 19:23:22.919979] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.643 [2024-02-14 19:23:22.920049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.920086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.920100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.643 [2024-02-14 19:23:22.920108] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.643 [2024-02-14 19:23:22.920121] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.643 [2024-02-14 19:23:22.920146] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.643 [2024-02-14 19:23:22.920155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.643 [2024-02-14 19:23:22.920163] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.643 [2024-02-14 19:23:22.920174] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.643 [2024-02-14 19:23:22.922826] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.643 [2024-02-14 19:23:22.922921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.922962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.922976] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.643 [2024-02-14 19:23:22.922985] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.643 [2024-02-14 19:23:22.922998] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.643 [2024-02-14 19:23:22.923011] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.643 [2024-02-14 19:23:22.923019] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.643 [2024-02-14 19:23:22.923027] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.643 [2024-02-14 19:23:22.923039] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:45.643 [2024-02-14 19:23:22.930025] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.643 [2024-02-14 19:23:22.930094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.930133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.930147] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.643 [2024-02-14 19:23:22.930158] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.643 [2024-02-14 19:23:22.930171] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.643 [2024-02-14 19:23:22.930197] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.643 [2024-02-14 19:23:22.930207] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.643 [2024-02-14 19:23:22.930214] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.643 [2024-02-14 19:23:22.930227] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.643 [2024-02-14 19:23:22.932881] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.643 [2024-02-14 19:23:22.932945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.932982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.932995] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.643 [2024-02-14 19:23:22.933004] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.643 [2024-02-14 19:23:22.933019] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.643 [2024-02-14 19:23:22.933031] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.643 [2024-02-14 19:23:22.933039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.643 [2024-02-14 19:23:22.933047] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.643 [2024-02-14 19:23:22.933059] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:45.643 [2024-02-14 19:23:22.940067] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.643 [2024-02-14 19:23:22.940131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.940168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.940180] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.643 [2024-02-14 19:23:22.940190] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.643 [2024-02-14 19:23:22.940203] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.643 [2024-02-14 19:23:22.940227] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.643 [2024-02-14 19:23:22.940236] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.643 [2024-02-14 19:23:22.940244] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.643 [2024-02-14 19:23:22.940256] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.643 [2024-02-14 19:23:22.942925] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.643 [2024-02-14 19:23:22.942987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.943024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.943037] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.643 [2024-02-14 19:23:22.943046] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.643 [2024-02-14 19:23:22.943059] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.643 [2024-02-14 19:23:22.943071] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.643 [2024-02-14 19:23:22.943079] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.643 [2024-02-14 19:23:22.943086] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.643 [2024-02-14 19:23:22.943098] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:45.643 [2024-02-14 19:23:22.950107] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.643 [2024-02-14 19:23:22.950172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.950208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.643 [2024-02-14 19:23:22.950221] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.643 [2024-02-14 19:23:22.950230] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.643 [2024-02-14 19:23:22.950243] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.643 [2024-02-14 19:23:22.950268] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.644 [2024-02-14 19:23:22.950277] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.644 [2024-02-14 19:23:22.950284] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.644 [2024-02-14 19:23:22.950296] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.644 [2024-02-14 19:23:22.952964] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.644 [2024-02-14 19:23:22.953027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.953063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.953077] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.644 [2024-02-14 19:23:22.953086] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.644 [2024-02-14 19:23:22.953100] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.644 [2024-02-14 19:23:22.953113] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.644 [2024-02-14 19:23:22.953120] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.644 [2024-02-14 19:23:22.953128] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.644 [2024-02-14 19:23:22.953139] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:45.644 [2024-02-14 19:23:22.960149] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.644 [2024-02-14 19:23:22.960212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.960249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.960262] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.644 [2024-02-14 19:23:22.960272] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.644 [2024-02-14 19:23:22.960286] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.644 [2024-02-14 19:23:22.960310] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.644 [2024-02-14 19:23:22.960319] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.644 [2024-02-14 19:23:22.960327] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.644 [2024-02-14 19:23:22.960339] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.644 [2024-02-14 19:23:22.963004] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.644 [2024-02-14 19:23:22.963066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.963103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.963117] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.644 [2024-02-14 19:23:22.963125] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.644 [2024-02-14 19:23:22.963138] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.644 [2024-02-14 19:23:22.963151] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.644 [2024-02-14 19:23:22.963159] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.644 [2024-02-14 19:23:22.963166] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.644 [2024-02-14 19:23:22.963178] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:45.644 [2024-02-14 19:23:22.970193] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.644 [2024-02-14 19:23:22.970264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.970303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.970316] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.644 [2024-02-14 19:23:22.970326] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.644 [2024-02-14 19:23:22.970339] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.644 [2024-02-14 19:23:22.970365] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.644 [2024-02-14 19:23:22.970374] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.644 [2024-02-14 19:23:22.970382] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.644 [2024-02-14 19:23:22.970394] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.644 [2024-02-14 19:23:22.973044] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.644 [2024-02-14 19:23:22.973108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.973145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.973158] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.644 [2024-02-14 19:23:22.973170] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.644 [2024-02-14 19:23:22.973182] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.644 [2024-02-14 19:23:22.973194] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.644 [2024-02-14 19:23:22.973202] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.644 [2024-02-14 19:23:22.973210] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.644 [2024-02-14 19:23:22.973222] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:45.644 [2024-02-14 19:23:22.980239] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.644 [2024-02-14 19:23:22.980303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.980340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.980353] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.644 [2024-02-14 19:23:22.980361] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.644 [2024-02-14 19:23:22.980377] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.644 [2024-02-14 19:23:22.980401] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.644 [2024-02-14 19:23:22.980410] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.644 [2024-02-14 19:23:22.980418] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.644 [2024-02-14 19:23:22.980429] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.644 [2024-02-14 19:23:22.983085] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.644 [2024-02-14 19:23:22.983147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.983184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.983197] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.644 [2024-02-14 19:23:22.983206] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.644 [2024-02-14 19:23:22.983218] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.644 [2024-02-14 19:23:22.983230] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.644 [2024-02-14 19:23:22.983238] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.644 [2024-02-14 19:23:22.983246] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.644 [2024-02-14 19:23:22.983257] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:45.644 [2024-02-14 19:23:22.990282] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.644 [2024-02-14 19:23:22.990345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.990382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.990395] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.644 [2024-02-14 19:23:22.990404] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.644 [2024-02-14 19:23:22.990417] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.644 [2024-02-14 19:23:22.990443] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.644 [2024-02-14 19:23:22.990452] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.644 [2024-02-14 19:23:22.990461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.644 [2024-02-14 19:23:22.990473] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.644 [2024-02-14 19:23:22.993124] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.644 [2024-02-14 19:23:22.993186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.993223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.644 [2024-02-14 19:23:22.993236] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.644 [2024-02-14 19:23:22.993245] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.644 [2024-02-14 19:23:22.993258] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.645 [2024-02-14 19:23:22.993270] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.645 [2024-02-14 19:23:22.993278] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.645 [2024-02-14 19:23:22.993285] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.645 [2024-02-14 19:23:22.993297] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:45.645 [2024-02-14 19:23:23.000322] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.645 [2024-02-14 19:23:23.000385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.645 [2024-02-14 19:23:23.000422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.645 [2024-02-14 19:23:23.000435] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237f960 with addr=10.0.0.2, port=4420 00:22:45.645 [2024-02-14 19:23:23.000443] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f960 is same with the state(5) to be set 00:22:45.645 [2024-02-14 19:23:23.000456] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237f960 (9): Bad file descriptor 00:22:45.645 [2024-02-14 19:23:23.000480] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:45.645 [2024-02-14 19:23:23.000499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:45.645 [2024-02-14 19:23:23.000507] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:45.645 [2024-02-14 19:23:23.000520] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.645 [2024-02-14 19:23:23.003165] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:45.645 [2024-02-14 19:23:23.003228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.645 [2024-02-14 19:23:23.003265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.645 [2024-02-14 19:23:23.003278] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232f090 with addr=10.0.0.3, port=4420 00:22:45.645 [2024-02-14 19:23:23.003287] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232f090 is same with the state(5) to be set 00:22:45.645 [2024-02-14 19:23:23.003299] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232f090 (9): Bad file descriptor 00:22:45.645 [2024-02-14 19:23:23.003312] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:45.645 [2024-02-14 19:23:23.003319] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:45.645 [2024-02-14 19:23:23.003327] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:45.645 [2024-02-14 19:23:23.003338] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
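The repeated "connect() failed, errno = 111" (ECONNREFUSED) entries above are the host-side reconnect loop still aiming at the 4420 listeners the test has just removed; each reset therefore ends in "Resetting controller failed." until the next discovery log page (below) steers the host to port 4421. A hedged way to confirm from the target side that 4420 is really gone while this is happening (the NQN and rpc.py path are taken from the trace; using the default /var/tmp/spdk.sock target socket is an assumption):

    # expect only 4421 to be listed for cnode0 at this point in the test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode0 \
        | jq -r '.[].address.trsvcid'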
00:22:45.645 [2024-02-14 19:23:23.008481] bdev_nvme.c:6491:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:45.645 [2024-02-14 19:23:23.008559] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:45.645 [2024-02-14 19:23:23.008577] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.645 [2024-02-14 19:23:23.008605] bdev_nvme.c:6491:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:22:45.645 [2024-02-14 19:23:23.008618] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:45.645 [2024-02-14 19:23:23.008642] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:45.904 [2024-02-14 19:23:23.096569] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:45.904 [2024-02-14 19:23:23.096617] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:46.471 19:23:23 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:46.730 19:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.730 19:23:23 -- common/autotest_common.sh@10 -- # set +x 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@68 -- # sort 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@68 -- # xargs 00:22:46.730 19:23:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.730 19:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:46.730 19:23:23 -- common/autotest_common.sh@10 -- # set +x 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@64 -- # sort 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@64 -- # xargs 00:22:46.730 19:23:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:46.730 19:23:23 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@72 -- # xargs 00:22:46.730 19:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.730 19:23:24 -- common/autotest_common.sh@10 -- # set +x 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:46.730 19:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
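The get_subsystem_paths checks traced above reduce to one small helper: ask the bdev layer which transport service ids (ports) a named controller currently has paths on. A minimal sketch of that pattern, using the same host RPC socket and jq filter as the trace:

    get_subsystem_paths() {
        # list every path's port for the given controller name, sorted numerically
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
            bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_subsystem_paths mdns0_nvme0   # prints "4421" once the 4420 path is gone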
00:22:46.730 19:23:24 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:46.730 19:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@72 -- # xargs 00:22:46.730 19:23:24 -- common/autotest_common.sh@10 -- # set +x 00:22:46.730 19:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:46.730 19:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.730 19:23:24 -- common/autotest_common.sh@10 -- # set +x 00:22:46.730 19:23:24 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:46.730 19:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.988 19:23:24 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:46.988 19:23:24 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:46.988 19:23:24 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:22:46.988 19:23:24 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:46.988 19:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.988 19:23:24 -- common/autotest_common.sh@10 -- # set +x 00:22:46.988 19:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.988 19:23:24 -- host/mdns_discovery.sh@172 -- # sleep 1 00:22:46.988 [2024-02-14 19:23:24.247045] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:47.924 19:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@80 -- # xargs 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@80 -- # sort 00:22:47.924 19:23:25 -- common/autotest_common.sh@10 -- # set +x 00:22:47.924 19:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:47.924 19:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.924 19:23:25 -- common/autotest_common.sh@10 -- # set +x 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@68 -- # sort 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@68 -- # xargs 00:22:47.924 19:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:47.924 19:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@64 -- # sort 00:22:47.924 19:23:25 -- common/autotest_common.sh@10 -- # set +x 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@64 -- # xargs 00:22:47.924 19:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:47.924 19:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.924 19:23:25 -- common/autotest_common.sh@10 -- # set +x 00:22:47.924 19:23:25 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:47.924 19:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.183 19:23:25 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:22:48.183 19:23:25 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:22:48.183 19:23:25 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:22:48.183 19:23:25 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:48.183 19:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.183 19:23:25 -- common/autotest_common.sh@10 -- # set +x 00:22:48.183 19:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.183 19:23:25 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:48.183 19:23:25 -- common/autotest_common.sh@638 -- # local es=0 00:22:48.183 19:23:25 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:48.183 19:23:25 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:48.183 19:23:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:48.183 19:23:25 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:48.183 19:23:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:48.183 19:23:25 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:48.183 19:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.183 19:23:25 -- common/autotest_common.sh@10 -- # set +x 00:22:48.183 [2024-02-14 19:23:25.396590] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:22:48.183 2024/02/14 19:23:25 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:22:48.183 request: 00:22:48.183 { 00:22:48.183 "method": "bdev_nvme_start_mdns_discovery", 00:22:48.183 "params": { 00:22:48.183 "name": "mdns", 00:22:48.183 "svcname": "_nvme-disc._http", 00:22:48.183 "hostnqn": "nqn.2021-12.io.spdk:test" 00:22:48.183 } 00:22:48.183 } 00:22:48.183 Got JSON-RPC error response 00:22:48.183 GoRPCClient: error on JSON-RPC call 00:22:48.183 19:23:25 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:48.183 19:23:25 -- 
common/autotest_common.sh@641 -- # es=1 00:22:48.183 19:23:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:48.183 19:23:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:48.183 19:23:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:48.183 19:23:25 -- host/mdns_discovery.sh@183 -- # sleep 5 00:22:48.441 [2024-02-14 19:23:25.785023] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:48.700 [2024-02-14 19:23:25.885021] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:48.700 [2024-02-14 19:23:25.985026] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:48.700 [2024-02-14 19:23:25.985181] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:22:48.700 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:48.700 cookie is 0 00:22:48.700 is_local: 1 00:22:48.700 our_own: 0 00:22:48.700 wide_area: 0 00:22:48.700 multicast: 1 00:22:48.700 cached: 1 00:22:48.700 [2024-02-14 19:23:26.085026] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:48.700 [2024-02-14 19:23:26.085194] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:22:48.700 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:48.700 cookie is 0 00:22:48.700 is_local: 1 00:22:48.700 our_own: 0 00:22:48.700 wide_area: 0 00:22:48.700 multicast: 1 00:22:48.700 cached: 1 00:22:49.633 [2024-02-14 19:23:26.991743] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:49.633 [2024-02-14 19:23:26.991760] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:49.633 [2024-02-14 19:23:26.991775] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:49.891 [2024-02-14 19:23:27.077830] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:22:49.891 [2024-02-14 19:23:27.091664] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:49.891 [2024-02-14 19:23:27.091682] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:49.891 [2024-02-14 19:23:27.091695] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:49.891 [2024-02-14 19:23:27.140216] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:49.891 [2024-02-14 19:23:27.140239] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:49.891 [2024-02-14 19:23:27.178304] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:22:49.891 [2024-02-14 19:23:27.236839] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:49.891 [2024-02-14 19:23:27.236862] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:53.178 19:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:53.178 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@80 -- # sort 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@80 -- # xargs 00:22:53.178 19:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:53.178 19:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@76 -- # sort 00:22:53.178 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@76 -- # xargs 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:53.178 19:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.178 19:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@64 -- # sort 00:22:53.178 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@64 -- # xargs 00:22:53.178 19:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:53.178 19:23:30 -- common/autotest_common.sh@638 -- # local es=0 00:22:53.178 19:23:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:53.178 19:23:30 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:53.178 19:23:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:53.178 19:23:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:53.178 19:23:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:53.178 19:23:30 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:53.178 19:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.178 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:22:53.178 [2024-02-14 19:23:30.584085] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:22:53.178 2024/02/14 19:23:30 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:22:53.178 request: 00:22:53.178 { 00:22:53.178 "method": "bdev_nvme_start_mdns_discovery", 00:22:53.178 "params": { 00:22:53.178 "name": "cdc", 00:22:53.178 "svcname": "_nvme-disc._tcp", 00:22:53.178 "hostnqn": "nqn.2021-12.io.spdk:test" 00:22:53.178 } 00:22:53.178 } 00:22:53.178 Got JSON-RPC error response 00:22:53.178 GoRPCClient: error on JSON-RPC call 00:22:53.178 19:23:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:53.178 19:23:30 -- common/autotest_common.sh@641 -- # es=1 00:22:53.178 19:23:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:53.178 19:23:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:53.178 19:23:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:22:53.178 19:23:30 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:53.178 19:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.178 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@76 -- # sort 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@76 -- # xargs 00:22:53.437 19:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.437 19:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.437 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@64 -- # sort 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@64 -- # xargs 00:22:53.437 19:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:53.437 19:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.437 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:22:53.437 19:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@197 -- # kill 85955 00:22:53.437 19:23:30 -- host/mdns_discovery.sh@200 -- # wait 85955 00:22:53.696 [2024-02-14 19:23:30.855645] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:22:53.696 19:23:30 -- host/mdns_discovery.sh@201 -- # kill 86035 00:22:53.696 Got SIGTERM, quitting. 00:22:53.696 19:23:30 -- host/mdns_discovery.sh@202 -- # kill 85980 00:22:53.696 Got SIGTERM, quitting. 
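Both negative checks above exercise the same rule: bdev_nvme_start_mdns_discovery refuses a second discovery service either under an already-used name or for an already-watched service type, returning -17 (File exists), which the NOT wrapper turns into es=1. A hedged way to reproduce the first of them directly against the host RPC socket (the command mirrors the trace; the echoed message is illustrative):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test \
        || echo "duplicate mDNS discovery rejected as expected (Code=-17, File exists)"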
00:22:53.696 19:23:30 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:22:53.696 19:23:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:53.696 19:23:30 -- nvmf/common.sh@116 -- # sync 00:22:53.696 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:53.696 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:53.696 avahi-daemon 0.8 exiting. 00:22:53.696 19:23:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:53.696 19:23:31 -- nvmf/common.sh@119 -- # set +e 00:22:53.696 19:23:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:53.696 19:23:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:53.696 rmmod nvme_tcp 00:22:53.696 rmmod nvme_fabrics 00:22:53.696 rmmod nvme_keyring 00:22:53.696 19:23:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:53.696 19:23:31 -- nvmf/common.sh@123 -- # set -e 00:22:53.697 19:23:31 -- nvmf/common.sh@124 -- # return 0 00:22:53.697 19:23:31 -- nvmf/common.sh@477 -- # '[' -n 85905 ']' 00:22:53.697 19:23:31 -- nvmf/common.sh@478 -- # killprocess 85905 00:22:53.697 19:23:31 -- common/autotest_common.sh@924 -- # '[' -z 85905 ']' 00:22:53.697 19:23:31 -- common/autotest_common.sh@928 -- # kill -0 85905 00:22:53.697 19:23:31 -- common/autotest_common.sh@929 -- # uname 00:22:53.697 19:23:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:53.697 19:23:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 85905 00:22:53.697 19:23:31 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:22:53.697 19:23:31 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:22:53.697 killing process with pid 85905 00:22:53.697 19:23:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 85905' 00:22:53.697 19:23:31 -- common/autotest_common.sh@943 -- # kill 85905 00:22:53.697 19:23:31 -- common/autotest_common.sh@948 -- # wait 85905 00:22:53.956 19:23:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:53.956 19:23:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:53.956 19:23:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:53.956 19:23:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.956 19:23:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:53.956 19:23:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.956 19:23:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.956 19:23:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.956 19:23:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:53.956 ************************************ 00:22:53.956 END TEST nvmf_mdns_discovery 00:22:53.956 ************************************ 00:22:53.956 00:22:53.956 real 0m20.480s 00:22:53.956 user 0m40.226s 00:22:53.956 sys 0m1.900s 00:22:53.956 19:23:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:53.956 19:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:54.216 19:23:31 -- nvmf/nvmf.sh@114 -- # [[ 1 -eq 1 ]] 00:22:54.216 19:23:31 -- nvmf/nvmf.sh@115 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:54.216 19:23:31 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:22:54.216 19:23:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:22:54.216 19:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:54.216 ************************************ 00:22:54.216 START TEST nvmf_multipath 00:22:54.216 
************************************ 00:22:54.216 19:23:31 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:54.216 * Looking for test storage... 00:22:54.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:54.216 19:23:31 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.216 19:23:31 -- nvmf/common.sh@7 -- # uname -s 00:22:54.216 19:23:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.216 19:23:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.216 19:23:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.216 19:23:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.216 19:23:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.216 19:23:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.216 19:23:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.216 19:23:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.216 19:23:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.216 19:23:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.216 19:23:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:22:54.216 19:23:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:22:54.216 19:23:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.216 19:23:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.216 19:23:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.216 19:23:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.216 19:23:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.216 19:23:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.216 19:23:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.216 19:23:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.216 19:23:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.216 19:23:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.216 19:23:31 -- paths/export.sh@5 -- # export PATH 00:22:54.217 19:23:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.217 19:23:31 -- nvmf/common.sh@46 -- # : 0 00:22:54.217 19:23:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:54.217 19:23:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:54.217 19:23:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:54.217 19:23:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.217 19:23:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.217 19:23:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:54.217 19:23:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:54.217 19:23:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:54.217 19:23:31 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.217 19:23:31 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.217 19:23:31 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:54.217 19:23:31 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:54.217 19:23:31 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.217 19:23:31 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:54.217 19:23:31 -- host/multipath.sh@30 -- # nvmftestinit 00:22:54.217 19:23:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:54.217 19:23:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.217 19:23:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:54.217 19:23:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:54.217 19:23:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:54.217 19:23:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.217 19:23:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.217 19:23:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.217 19:23:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:54.217 19:23:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:54.217 19:23:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:54.217 19:23:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:54.217 19:23:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:54.217 19:23:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:54.217 19:23:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.217 19:23:31 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.217 19:23:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:54.217 19:23:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:54.217 19:23:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:54.217 19:23:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:54.217 19:23:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:54.217 19:23:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.217 19:23:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:54.217 19:23:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:54.217 19:23:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:54.217 19:23:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:54.217 19:23:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:54.217 19:23:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:54.217 Cannot find device "nvmf_tgt_br" 00:22:54.217 19:23:31 -- nvmf/common.sh@154 -- # true 00:22:54.217 19:23:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.217 Cannot find device "nvmf_tgt_br2" 00:22:54.217 19:23:31 -- nvmf/common.sh@155 -- # true 00:22:54.217 19:23:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:54.217 19:23:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:54.217 Cannot find device "nvmf_tgt_br" 00:22:54.217 19:23:31 -- nvmf/common.sh@157 -- # true 00:22:54.217 19:23:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:54.217 Cannot find device "nvmf_tgt_br2" 00:22:54.217 19:23:31 -- nvmf/common.sh@158 -- # true 00:22:54.217 19:23:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:54.217 19:23:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:54.476 19:23:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.476 19:23:31 -- nvmf/common.sh@161 -- # true 00:22:54.476 19:23:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.476 19:23:31 -- nvmf/common.sh@162 -- # true 00:22:54.476 19:23:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:54.476 19:23:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:54.476 19:23:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:54.476 19:23:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:54.476 19:23:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:54.476 19:23:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:54.476 19:23:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:54.476 19:23:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:54.476 19:23:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:54.476 19:23:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:54.476 19:23:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:54.476 19:23:31 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:22:54.476 19:23:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:54.476 19:23:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:54.476 19:23:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:54.476 19:23:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:54.476 19:23:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:54.476 19:23:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:54.476 19:23:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:54.476 19:23:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:54.476 19:23:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:54.476 19:23:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:54.476 19:23:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:54.476 19:23:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:54.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:22:54.476 00:22:54.476 --- 10.0.0.2 ping statistics --- 00:22:54.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.476 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:54.476 19:23:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:54.476 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:54.476 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:22:54.476 00:22:54.476 --- 10.0.0.3 ping statistics --- 00:22:54.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.476 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:54.476 19:23:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:54.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:54.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:54.476 00:22:54.476 --- 10.0.0.1 ping statistics --- 00:22:54.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.476 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:54.476 19:23:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.476 19:23:31 -- nvmf/common.sh@421 -- # return 0 00:22:54.476 19:23:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:54.476 19:23:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.476 19:23:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:54.476 19:23:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:54.476 19:23:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.476 19:23:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:54.476 19:23:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:54.476 19:23:31 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:54.476 19:23:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:54.476 19:23:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:54.476 19:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:54.476 19:23:31 -- nvmf/common.sh@469 -- # nvmfpid=86551 00:22:54.476 19:23:31 -- nvmf/common.sh@470 -- # waitforlisten 86551 00:22:54.476 19:23:31 -- common/autotest_common.sh@817 -- # '[' -z 86551 ']' 00:22:54.476 19:23:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.476 19:23:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:54.476 19:23:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:54.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.476 19:23:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.476 19:23:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:54.476 19:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:54.476 [2024-02-14 19:23:31.888814] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:22:54.476 [2024-02-14 19:23:31.888885] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.735 [2024-02-14 19:23:32.020723] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:54.735 [2024-02-14 19:23:32.110146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:54.735 [2024-02-14 19:23:32.110300] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.735 [2024-02-14 19:23:32.110314] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.735 [2024-02-14 19:23:32.110322] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
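waitforlisten above simply polls until the freshly launched nvmf_tgt answers on its RPC socket before any configuration RPC is sent. A minimal sketch of that readiness loop (the /var/tmp/spdk.sock path comes from the trace; probing liveness with rpc_get_methods is an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        # the target is considered up once any RPC succeeds on its socket
        $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done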
00:22:54.735 [2024-02-14 19:23:32.110465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.735 [2024-02-14 19:23:32.110476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.672 19:23:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:55.672 19:23:32 -- common/autotest_common.sh@850 -- # return 0 00:22:55.672 19:23:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:55.672 19:23:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:55.672 19:23:32 -- common/autotest_common.sh@10 -- # set +x 00:22:55.672 19:23:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.672 19:23:32 -- host/multipath.sh@33 -- # nvmfapp_pid=86551 00:22:55.672 19:23:32 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:55.931 [2024-02-14 19:23:33.194067] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.931 19:23:33 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:56.190 Malloc0 00:22:56.190 19:23:33 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:56.448 19:23:33 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.707 19:23:33 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.707 [2024-02-14 19:23:34.117774] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.965 19:23:34 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:56.965 [2024-02-14 19:23:34.373960] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:57.224 19:23:34 -- host/multipath.sh@44 -- # bdevperf_pid=86649 00:22:57.224 19:23:34 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:57.224 19:23:34 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.224 19:23:34 -- host/multipath.sh@47 -- # waitforlisten 86649 /var/tmp/bdevperf.sock 00:22:57.224 19:23:34 -- common/autotest_common.sh@817 -- # '[' -z 86649 ']' 00:22:57.224 19:23:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.225 19:23:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:57.225 19:23:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
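The subsystem above is created with ANA reporting enabled (-r) and at most two listeners (-m 2), which is what lets the test flip each port's ANA state independently later on. A hedged one-liner for inspecting the per-listener ANA state at any point (the NQN and the ana_states field follow the trace; the printed format is illustrative):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | "\(.address.trsvcid) \(.ana_states[0].ana_state)"'
    # e.g. "4420 non_optimized" and "4421 optimized" once set_ANA_state has run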
00:22:57.225 19:23:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:57.225 19:23:34 -- common/autotest_common.sh@10 -- # set +x 00:22:58.161 19:23:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:58.161 19:23:35 -- common/autotest_common.sh@850 -- # return 0 00:22:58.162 19:23:35 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:58.420 19:23:35 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:58.679 Nvme0n1 00:22:58.679 19:23:36 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:58.937 Nvme0n1 00:22:59.196 19:23:36 -- host/multipath.sh@78 -- # sleep 1 00:22:59.196 19:23:36 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:00.131 19:23:37 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:00.131 19:23:37 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:00.389 19:23:37 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:00.648 19:23:37 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:00.648 19:23:37 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86551 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:00.648 19:23:37 -- host/multipath.sh@65 -- # dtrace_pid=86742 00:23:00.648 19:23:37 -- host/multipath.sh@66 -- # sleep 6 00:23:07.213 19:23:43 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:07.213 19:23:43 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:07.213 19:23:44 -- host/multipath.sh@67 -- # active_port=4421 00:23:07.213 19:23:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:07.213 Attaching 4 probes... 
00:23:07.213 @path[10.0.0.2, 4421]: 20170 00:23:07.213 @path[10.0.0.2, 4421]: 20518 00:23:07.213 @path[10.0.0.2, 4421]: 20566 00:23:07.213 @path[10.0.0.2, 4421]: 20476 00:23:07.213 @path[10.0.0.2, 4421]: 20840 00:23:07.213 19:23:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:07.213 19:23:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:07.213 19:23:44 -- host/multipath.sh@69 -- # sed -n 1p 00:23:07.213 19:23:44 -- host/multipath.sh@69 -- # port=4421 00:23:07.213 19:23:44 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:07.213 19:23:44 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:07.213 19:23:44 -- host/multipath.sh@72 -- # kill 86742 00:23:07.213 19:23:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:07.213 19:23:44 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:07.213 19:23:44 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:07.213 19:23:44 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:07.213 19:23:44 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:07.213 19:23:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86551 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:07.213 19:23:44 -- host/multipath.sh@65 -- # dtrace_pid=86867 00:23:07.213 19:23:44 -- host/multipath.sh@66 -- # sleep 6 00:23:13.808 19:23:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:13.808 19:23:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:13.808 19:23:50 -- host/multipath.sh@67 -- # active_port=4420 00:23:13.808 19:23:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:13.808 Attaching 4 probes... 
00:23:13.808 @path[10.0.0.2, 4420]: 22176 00:23:13.808 @path[10.0.0.2, 4420]: 22568 00:23:13.808 @path[10.0.0.2, 4420]: 22580 00:23:13.808 @path[10.0.0.2, 4420]: 22551 00:23:13.808 @path[10.0.0.2, 4420]: 22347 00:23:13.808 19:23:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:13.808 19:23:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:13.808 19:23:50 -- host/multipath.sh@69 -- # sed -n 1p 00:23:13.808 19:23:50 -- host/multipath.sh@69 -- # port=4420 00:23:13.808 19:23:50 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:13.808 19:23:50 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:13.808 19:23:50 -- host/multipath.sh@72 -- # kill 86867 00:23:13.808 19:23:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:13.808 19:23:50 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:13.808 19:23:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:13.808 19:23:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:14.078 19:23:51 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:14.078 19:23:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86551 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:14.078 19:23:51 -- host/multipath.sh@65 -- # dtrace_pid=87003 00:23:14.078 19:23:51 -- host/multipath.sh@66 -- # sleep 6 00:23:20.641 19:23:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:20.641 19:23:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:20.641 19:23:57 -- host/multipath.sh@67 -- # active_port=4421 00:23:20.641 19:23:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:20.641 Attaching 4 probes... 
00:23:20.641 @path[10.0.0.2, 4421]: 15807 00:23:20.641 @path[10.0.0.2, 4421]: 20372 00:23:20.641 @path[10.0.0.2, 4421]: 20378 00:23:20.641 @path[10.0.0.2, 4421]: 20583 00:23:20.641 @path[10.0.0.2, 4421]: 20513 00:23:20.641 19:23:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:20.641 19:23:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:20.641 19:23:57 -- host/multipath.sh@69 -- # sed -n 1p 00:23:20.641 19:23:57 -- host/multipath.sh@69 -- # port=4421 00:23:20.641 19:23:57 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:20.641 19:23:57 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:20.641 19:23:57 -- host/multipath.sh@72 -- # kill 87003 00:23:20.641 19:23:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:20.641 19:23:57 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:20.641 19:23:57 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:20.641 19:23:57 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:20.899 19:23:58 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:20.899 19:23:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86551 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:20.899 19:23:58 -- host/multipath.sh@65 -- # dtrace_pid=87133 00:23:20.899 19:23:58 -- host/multipath.sh@66 -- # sleep 6 00:23:27.460 19:24:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:27.460 19:24:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:27.460 19:24:04 -- host/multipath.sh@67 -- # active_port= 00:23:27.460 19:24:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.460 Attaching 4 probes... 
00:23:27.460 00:23:27.460 00:23:27.460 00:23:27.460 00:23:27.460 00:23:27.460 19:24:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:27.460 19:24:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:27.460 19:24:04 -- host/multipath.sh@69 -- # sed -n 1p 00:23:27.460 19:24:04 -- host/multipath.sh@69 -- # port= 00:23:27.460 19:24:04 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:27.460 19:24:04 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:27.460 19:24:04 -- host/multipath.sh@72 -- # kill 87133 00:23:27.460 19:24:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.460 19:24:04 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:27.460 19:24:04 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:27.460 19:24:04 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:27.460 19:24:04 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:27.460 19:24:04 -- host/multipath.sh@65 -- # dtrace_pid=87264 00:23:27.460 19:24:04 -- host/multipath.sh@66 -- # sleep 6 00:23:27.460 19:24:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86551 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:34.018 19:24:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:34.018 19:24:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:34.018 19:24:11 -- host/multipath.sh@67 -- # active_port=4421 00:23:34.018 19:24:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:34.018 Attaching 4 probes... 
00:23:34.018 @path[10.0.0.2, 4421]: 19958 00:23:34.018 @path[10.0.0.2, 4421]: 20204 00:23:34.018 @path[10.0.0.2, 4421]: 20244 00:23:34.018 @path[10.0.0.2, 4421]: 20225 00:23:34.018 @path[10.0.0.2, 4421]: 20241 00:23:34.018 19:24:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:34.018 19:24:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:34.018 19:24:11 -- host/multipath.sh@69 -- # sed -n 1p 00:23:34.018 19:24:11 -- host/multipath.sh@69 -- # port=4421 00:23:34.018 19:24:11 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:34.018 19:24:11 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:34.018 19:24:11 -- host/multipath.sh@72 -- # kill 87264 00:23:34.018 19:24:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:34.018 19:24:11 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:34.018 [2024-02-14 19:24:11.233332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.018 [2024-02-14 19:24:11.233376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.018 [2024-02-14 19:24:11.233401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.018 [2024-02-14 19:24:11.233409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233729] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the 
state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.233996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.234003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 [2024-02-14 19:24:11.234011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32320 is same with the state(5) to be set 00:23:34.019 19:24:11 -- host/multipath.sh@101 -- # sleep 1 00:23:34.955 19:24:12 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:34.955 19:24:12 -- host/multipath.sh@65 -- # dtrace_pid=87394 00:23:34.955 19:24:12 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86551 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:34.955 19:24:12 -- host/multipath.sh@66 -- # sleep 6 00:23:41.517 19:24:18 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:41.517 19:24:18 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:41.517 19:24:18 -- host/multipath.sh@67 -- # active_port=4420 00:23:41.517 19:24:18 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:41.517 Attaching 4 probes... 
00:23:41.517 @path[10.0.0.2, 4420]: 21456 00:23:41.517 @path[10.0.0.2, 4420]: 21783 00:23:41.517 @path[10.0.0.2, 4420]: 21866 00:23:41.517 @path[10.0.0.2, 4420]: 21825 00:23:41.517 @path[10.0.0.2, 4420]: 21820 00:23:41.517 19:24:18 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:41.517 19:24:18 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:41.517 19:24:18 -- host/multipath.sh@69 -- # sed -n 1p 00:23:41.517 19:24:18 -- host/multipath.sh@69 -- # port=4420 00:23:41.517 19:24:18 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:41.517 19:24:18 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:41.517 19:24:18 -- host/multipath.sh@72 -- # kill 87394 00:23:41.517 19:24:18 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:41.517 19:24:18 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:41.517 [2024-02-14 19:24:18.695885] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.517 19:24:18 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:41.517 19:24:18 -- host/multipath.sh@111 -- # sleep 6 00:23:48.075 19:24:24 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:48.075 19:24:24 -- host/multipath.sh@65 -- # dtrace_pid=87587 00:23:48.075 19:24:24 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86551 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:48.075 19:24:24 -- host/multipath.sh@66 -- # sleep 6 00:23:54.647 19:24:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:54.647 19:24:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:54.647 19:24:31 -- host/multipath.sh@67 -- # active_port=4421 00:23:54.647 19:24:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.647 Attaching 4 probes... 
00:23:54.647 @path[10.0.0.2, 4421]: 19904 00:23:54.647 @path[10.0.0.2, 4421]: 20477 00:23:54.647 @path[10.0.0.2, 4421]: 20510 00:23:54.647 @path[10.0.0.2, 4421]: 20092 00:23:54.647 @path[10.0.0.2, 4421]: 19914 00:23:54.647 19:24:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:54.647 19:24:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:54.647 19:24:31 -- host/multipath.sh@69 -- # sed -n 1p 00:23:54.647 19:24:31 -- host/multipath.sh@69 -- # port=4421 00:23:54.647 19:24:31 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:54.647 19:24:31 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:54.647 19:24:31 -- host/multipath.sh@72 -- # kill 87587 00:23:54.647 19:24:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.647 19:24:31 -- host/multipath.sh@114 -- # killprocess 86649 00:23:54.647 19:24:31 -- common/autotest_common.sh@924 -- # '[' -z 86649 ']' 00:23:54.647 19:24:31 -- common/autotest_common.sh@928 -- # kill -0 86649 00:23:54.647 19:24:31 -- common/autotest_common.sh@929 -- # uname 00:23:54.647 19:24:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:54.647 19:24:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 86649 00:23:54.647 killing process with pid 86649 00:23:54.647 19:24:31 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:23:54.647 19:24:31 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:23:54.648 19:24:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 86649' 00:23:54.648 19:24:31 -- common/autotest_common.sh@943 -- # kill 86649 00:23:54.648 19:24:31 -- common/autotest_common.sh@948 -- # wait 86649 00:23:54.648 Connection closed with partial response: 00:23:54.648 00:23:54.648 00:23:54.648 19:24:31 -- host/multipath.sh@116 -- # wait 86649 00:23:54.648 19:24:31 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:54.648 [2024-02-14 19:23:34.438775] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:23:54.648 [2024-02-14 19:23:34.438868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86649 ] 00:23:54.648 [2024-02-14 19:23:34.579031] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.648 [2024-02-14 19:23:34.675648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.648 Running I/O for 90 seconds... 
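Each confirm_io_on_port cycle above follows the same shape: start a bpftrace probe against the target pid (bpftrace.sh 86551 nvmf_path.bt), let I/O run for a few seconds, ask the target which listener currently has the expected ANA state, and check that the probe only saw I/O on that port. A condensed sketch of the verification step, reusing the jq filter and text pipeline from the log (rpc.py stands for the full scripts/rpc.py path, and trace.txt is the probe output the helper writes and later removes):

# expected state/port for a given cycle, e.g. optimized on 4421
state=optimized
want_port=4421

# which port does the target report as being in the expected ANA state?
active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r ".[] | select (.ana_states[0].ana_state==\"$state\") | .address.trsvcid")

# which port did the probe actually see I/O on? (take the first @path line)
probed_port=$(cut -d ']' -f1 trace.txt \
    | awk '$1=="@path[10.0.0.2," {print $2}' \
    | sed -n 1p)

[[ $active_port == "$want_port" && $probed_port == "$want_port" ]]

When both listeners are set inaccessible (the confirm_io_on_port '' '' cycle above), the jq filter matches nothing and the probe map stays empty, so both variables end up empty, which is exactly what that cycle asserts.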
00:23:54.648 [2024-02-14 19:23:44.501609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.501714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.501777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.501801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.502141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.502186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.502262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.502301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.502373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502843] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.502982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.502998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.503047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.503084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.503120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.503156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.503192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.503233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114216 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.503278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.503314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.648 [2024-02-14 19:23:44.503349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.648 [2024-02-14 19:23:44.503385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:54.648 [2024-02-14 19:23:44.503406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.503421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.503457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.503535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.503574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.503611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.503648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503668] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.503684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.503719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.503756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.503793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.503829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.503859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.503875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.504423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.504467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.504551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.504589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 
19:23:44.504610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.504626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.504663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.504699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.649 [2024-02-14 19:23:44.504735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.504771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.504809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.504845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.504893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.504944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.649 [2024-02-14 19:23:44.504977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.649 [2024-02-14 19:23:44.504993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:23:54.649 [... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* records omitted: READ and WRITE commands on sqid:1 (varying cid, nsid:1, len:8) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged in bursts at 19:23:44.505-44.508, 19:23:51.014-51.020 and 19:23:58.064-58.065 ...]
00:23:54.655 [2024-02-14 19:23:58.065394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.655 [2024-02-14 19:23:58.065409] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.065429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.065444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.065465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.065499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.065526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.065543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.065563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.655 [2024-02-14 19:23:58.065578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.065598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.655 [2024-02-14 19:23:58.065613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.065634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.655 [2024-02-14 19:23:58.065650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.065788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.065812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.065840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.655 [2024-02-14 19:23:58.065857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.065880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.065895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:54.655 [2024-02-14 19:23:58.066071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.655 [2024-02-14 19:23:58.066151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.655 [2024-02-14 19:23:58.066188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.655 [2024-02-14 19:23:58.066357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:78 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.655 [2024-02-14 19:23:58.066474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.655 [2024-02-14 19:23:58.066797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:54.655 [2024-02-14 19:23:58.066859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.655 [2024-02-14 19:23:58.066874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 
19:23:58.066897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.066912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.066935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.656 [2024-02-14 19:23:58.066996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.656 [2024-02-14 19:23:58.067749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.067811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.067838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.656 [2024-02-14 19:23:58.068068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.656 [2024-02-14 19:23:58.068124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.068166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.068207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.656 [2024-02-14 19:23:58.068249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.068291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.656 [2024-02-14 19:23:58.068332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:54.656 [2024-02-14 19:23:58.068372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.068414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.068455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:54.656 [2024-02-14 19:23:58.068481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.656 [2024-02-14 19:23:58.068518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.657 [2024-02-14 19:23:58.068573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.657 [2024-02-14 19:23:58.068618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.657 [2024-02-14 19:23:58.068659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.068700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.068741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.068793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.068837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.657 [2024-02-14 19:23:58.068879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.068920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.068961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.068985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.657 [2024-02-14 19:23:58.069001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.657 [2024-02-14 19:23:58.069082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.657 [2024-02-14 19:23:58.069174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 
19:23:58.069240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.657 [2024-02-14 19:23:58.069256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.657 [2024-02-14 19:23:58.069337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:23:58.069899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:23:58.069915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:24:11.234458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:24:11.234633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:24:11.234661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:24:11.234676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:24:11.234692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:24:11.234705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:24:11.234720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:24:11.234733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:24:11.234748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:24:11.234760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:24:11.234775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.657 [2024-02-14 19:24:11.234811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.657 [2024-02-14 19:24:11.234827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.234841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.234866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.234886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.234908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.234929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.234943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.234966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.234984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.234996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 
19:24:11.235699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.235951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.235973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.658 [2024-02-14 19:24:11.235986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.236000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.658 [2024-02-14 19:24:11.236014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.658 [2024-02-14 19:24:11.236028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236274] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 
nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.236830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.236977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.236991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.237004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.237018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.237032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.237048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.237061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.237081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.237095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.237109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.659 [2024-02-14 19:24:11.237123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.237137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.659 [2024-02-14 19:24:11.237150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.659 [2024-02-14 19:24:11.237165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 
[2024-02-14 19:24:11.237177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.237234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.237289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237461] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.237628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.237712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.237739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.237856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.237909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.237974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.237989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.238001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.238039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.238066] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.238093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.238121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.660 [2024-02-14 19:24:11.238148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.238182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.238210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.238237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.238266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.238298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.660 [2024-02-14 19:24:11.238312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.660 [2024-02-14 19:24:11.238325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.661 [2024-02-14 19:24:11.238341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.661 [2024-02-14 19:24:11.238353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.661 [2024-02-14 19:24:11.238367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.661 [2024-02-14 19:24:11.238380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.661 [2024-02-14 19:24:11.238394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.661 [2024-02-14 19:24:11.238407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.661 [2024-02-14 19:24:11.238427] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e49ff0 is same with the state(5) to be set 00:23:54.661 [2024-02-14 19:24:11.238445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.661 [2024-02-14 19:24:11.238456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.661 [2024-02-14 19:24:11.238466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112800 len:8 PRP1 0x0 PRP2 0x0 00:23:54.661 [2024-02-14 19:24:11.238479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.661 [2024-02-14 19:24:11.238575] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e49ff0 was disconnected and freed. reset controller. 00:23:54.661 [2024-02-14 19:24:11.238683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.661 [2024-02-14 19:24:11.238707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.661 [2024-02-14 19:24:11.238723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.661 [2024-02-14 19:24:11.238749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.661 [2024-02-14 19:24:11.238764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.661 [2024-02-14 19:24:11.238776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.661 [2024-02-14 19:24:11.238790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.661 [2024-02-14 19:24:11.238802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.661 [2024-02-14 19:24:11.238814] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4fba0 is same with the state(5) to be set 00:23:54.661 [2024-02-14 19:24:11.239940] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:54.661 [2024-02-14 19:24:11.239989] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4fba0 (9): Bad file descriptor 00:23:54.661 
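The wall of NOTICE pairs above is one command/completion pair for every I/O that was still queued on qpair 0x1e49ff0 when it was torn down for the controller reset: each request is completed manually with status (00/08), i.e. Status Code Type 0x0 (generic) / Status Code 0x08, "Command Aborted due to SQ Deletion", before bdev_nvme frees the qpair and starts reconnecting. A quick way to count those aborts from a saved copy of this console output (the log file name here is only an assumption, not something the test writes):

# sketch only: count the aborted I/Os logged during the qpair teardown above
grep -c 'ABORTED - SQ DELETION (00/08) qid:1' nvmf-tcp-vg-autotest.log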
[2024-02-14 19:24:11.240108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.661 [2024-02-14 19:24:11.240166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.661 [2024-02-14 19:24:11.240191] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4fba0 with addr=10.0.0.2, port=4421 00:23:54.661 [2024-02-14 19:24:11.240206] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4fba0 is same with the state(5) to be set 00:23:54.661 [2024-02-14 19:24:11.240232] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4fba0 (9): Bad file descriptor 00:23:54.661 [2024-02-14 19:24:11.240255] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:54.661 [2024-02-14 19:24:11.240277] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:54.661 [2024-02-14 19:24:11.240291] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:54.661 [2024-02-14 19:24:11.250583] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.661 [2024-02-14 19:24:11.250617] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:54.661 [2024-02-14 19:24:21.303249] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:54.661 Received shutdown signal, test time was about 54.774554 seconds 00:23:54.661 00:23:54.661 Latency(us) 00:23:54.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.661 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.661 Verification LBA range: start 0x0 length 0x4000 00:23:54.661 Nvme0n1 : 54.77 11920.28 46.56 0.00 0.00 10721.91 834.09 7015926.69 00:23:54.661 =================================================================================================================== 00:23:54.661 Total : 11920.28 46.56 0.00 0.00 10721.91 834.09 7015926.69 00:23:54.661 19:24:31 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.661 19:24:31 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:23:54.661 19:24:31 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:54.661 19:24:31 -- host/multipath.sh@125 -- # nvmftestfini 00:23:54.661 19:24:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:54.661 19:24:31 -- nvmf/common.sh@116 -- # sync 00:23:54.661 19:24:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:54.661 19:24:31 -- nvmf/common.sh@119 -- # set +e 00:23:54.661 19:24:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:54.661 19:24:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:54.661 rmmod nvme_tcp 00:23:54.661 rmmod nvme_fabrics 00:23:54.661 rmmod nvme_keyring 00:23:54.661 19:24:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:54.661 19:24:31 -- nvmf/common.sh@123 -- # set -e 00:23:54.661 19:24:31 -- nvmf/common.sh@124 -- # return 0 00:23:54.661 19:24:31 -- nvmf/common.sh@477 -- # '[' -n 86551 ']' 00:23:54.661 19:24:31 -- nvmf/common.sh@478 -- # killprocess 86551 00:23:54.661 19:24:31 -- common/autotest_common.sh@924 -- # '[' -z 86551 ']' 00:23:54.661 19:24:31 -- common/autotest_common.sh@928 -- # kill -0 86551 
00:23:54.661 19:24:31 -- common/autotest_common.sh@929 -- # uname 00:23:54.661 19:24:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:54.661 19:24:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 86551 00:23:54.661 killing process with pid 86551 00:23:54.661 19:24:31 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:54.661 19:24:31 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:54.661 19:24:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 86551' 00:23:54.661 19:24:31 -- common/autotest_common.sh@943 -- # kill 86551 00:23:54.661 19:24:31 -- common/autotest_common.sh@948 -- # wait 86551 00:23:54.920 19:24:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:54.920 19:24:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:54.920 19:24:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:54.920 19:24:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.920 19:24:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:54.920 19:24:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.920 19:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.920 19:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.920 19:24:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:54.920 00:23:54.920 real 1m0.824s 00:23:54.920 user 2m49.760s 00:23:54.920 sys 0m14.620s 00:23:54.920 19:24:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:54.920 19:24:32 -- common/autotest_common.sh@10 -- # set +x 00:23:54.920 ************************************ 00:23:54.920 END TEST nvmf_multipath 00:23:54.920 ************************************ 00:23:54.920 19:24:32 -- nvmf/nvmf.sh@116 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:54.920 19:24:32 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:23:54.920 19:24:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:23:54.920 19:24:32 -- common/autotest_common.sh@10 -- # set +x 00:23:54.920 ************************************ 00:23:54.920 START TEST nvmf_timeout 00:23:54.920 ************************************ 00:23:54.920 19:24:32 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:55.179 * Looking for test storage... 
00:23:55.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:55.179 19:24:32 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:55.179 19:24:32 -- nvmf/common.sh@7 -- # uname -s 00:23:55.179 19:24:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.179 19:24:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.179 19:24:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.179 19:24:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.179 19:24:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.179 19:24:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.179 19:24:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.179 19:24:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.179 19:24:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.179 19:24:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.179 19:24:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:23:55.179 19:24:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:23:55.179 19:24:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.179 19:24:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.179 19:24:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:55.179 19:24:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:55.179 19:24:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.179 19:24:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.179 19:24:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.179 19:24:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.179 19:24:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.179 19:24:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.179 19:24:32 -- paths/export.sh@5 
-- # export PATH 00:23:55.179 19:24:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.179 19:24:32 -- nvmf/common.sh@46 -- # : 0 00:23:55.179 19:24:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:55.179 19:24:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:55.179 19:24:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:55.179 19:24:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.179 19:24:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.179 19:24:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:55.179 19:24:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:55.179 19:24:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:55.179 19:24:32 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:55.179 19:24:32 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:55.179 19:24:32 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:55.179 19:24:32 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:55.179 19:24:32 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.179 19:24:32 -- host/timeout.sh@19 -- # nvmftestinit 00:23:55.179 19:24:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:55.179 19:24:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.179 19:24:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:55.179 19:24:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:55.179 19:24:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:55.179 19:24:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.179 19:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.179 19:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.179 19:24:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:55.179 19:24:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:55.179 19:24:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:55.179 19:24:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:55.179 19:24:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:55.179 19:24:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:55.179 19:24:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.179 19:24:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.179 19:24:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:55.180 19:24:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:55.180 19:24:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:55.180 19:24:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:55.180 19:24:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:55.180 19:24:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.180 19:24:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:55.180 19:24:32 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:55.180 19:24:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:55.180 19:24:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:55.180 19:24:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:55.180 19:24:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:55.180 Cannot find device "nvmf_tgt_br" 00:23:55.180 19:24:32 -- nvmf/common.sh@154 -- # true 00:23:55.180 19:24:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.180 Cannot find device "nvmf_tgt_br2" 00:23:55.180 19:24:32 -- nvmf/common.sh@155 -- # true 00:23:55.180 19:24:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:55.180 19:24:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:55.180 Cannot find device "nvmf_tgt_br" 00:23:55.180 19:24:32 -- nvmf/common.sh@157 -- # true 00:23:55.180 19:24:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:55.180 Cannot find device "nvmf_tgt_br2" 00:23:55.180 19:24:32 -- nvmf/common.sh@158 -- # true 00:23:55.180 19:24:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:55.180 19:24:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:55.180 19:24:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:55.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:55.180 19:24:32 -- nvmf/common.sh@161 -- # true 00:23:55.180 19:24:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:55.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:55.180 19:24:32 -- nvmf/common.sh@162 -- # true 00:23:55.180 19:24:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:55.180 19:24:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:55.180 19:24:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:55.180 19:24:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:55.180 19:24:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:55.180 19:24:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:55.180 19:24:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:55.180 19:24:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:55.180 19:24:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:55.180 19:24:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:55.180 19:24:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:55.439 19:24:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:55.439 19:24:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:55.439 19:24:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:55.439 19:24:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:55.439 19:24:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:55.439 19:24:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:55.439 19:24:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:55.439 19:24:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:55.439 19:24:32 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:55.439 19:24:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:55.439 19:24:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:55.439 19:24:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:55.439 19:24:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:55.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:23:55.439 00:23:55.439 --- 10.0.0.2 ping statistics --- 00:23:55.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.439 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:55.439 19:24:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:55.439 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:55.439 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:23:55.439 00:23:55.439 --- 10.0.0.3 ping statistics --- 00:23:55.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.439 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:55.439 19:24:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:55.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:23:55.439 00:23:55.439 --- 10.0.0.1 ping statistics --- 00:23:55.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.439 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:23:55.439 19:24:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.439 19:24:32 -- nvmf/common.sh@421 -- # return 0 00:23:55.439 19:24:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:55.439 19:24:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.439 19:24:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:55.439 19:24:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:55.439 19:24:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.439 19:24:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:55.439 19:24:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:55.439 19:24:32 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:55.439 19:24:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:55.439 19:24:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:55.439 19:24:32 -- common/autotest_common.sh@10 -- # set +x 00:23:55.439 19:24:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:55.439 19:24:32 -- nvmf/common.sh@469 -- # nvmfpid=87904 00:23:55.439 19:24:32 -- nvmf/common.sh@470 -- # waitforlisten 87904 00:23:55.439 19:24:32 -- common/autotest_common.sh@817 -- # '[' -z 87904 ']' 00:23:55.439 19:24:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.439 19:24:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:55.439 19:24:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
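For readability, the nvmf_veth_init sequence traced a little further up reduces to the topology below. The commands are condensed from the ip/iptables records in this log and are meant as a root-shell sketch only; cleanup of a previous run and the second target interface nvmf_tgt_if2 at 10.0.0.3 are handled the same way and omitted here:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair, stays in the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge                               # bridge the two host-side peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that in place 10.0.0.1 on the host side reaches 10.0.0.2 inside nvmf_tgt_ns_spdk across nvmf_br, which is exactly what the three pings above verify before nvmf_tgt is started in the namespace.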
00:23:55.439 19:24:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:55.439 19:24:32 -- common/autotest_common.sh@10 -- # set +x 00:23:55.439 [2024-02-14 19:24:32.788954] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:23:55.439 [2024-02-14 19:24:32.789037] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.698 [2024-02-14 19:24:32.926239] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:55.698 [2024-02-14 19:24:33.020737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:55.698 [2024-02-14 19:24:33.020911] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.698 [2024-02-14 19:24:33.020929] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.698 [2024-02-14 19:24:33.020941] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.698 [2024-02-14 19:24:33.021059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.698 [2024-02-14 19:24:33.021352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.264 19:24:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:56.264 19:24:33 -- common/autotest_common.sh@850 -- # return 0 00:23:56.264 19:24:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:56.264 19:24:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:56.264 19:24:33 -- common/autotest_common.sh@10 -- # set +x 00:23:56.264 19:24:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.264 19:24:33 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:56.264 19:24:33 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:56.523 [2024-02-14 19:24:33.933134] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.781 19:24:33 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:56.781 Malloc0 00:23:56.781 19:24:34 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.038 19:24:34 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.297 19:24:34 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.297 [2024-02-14 19:24:34.689756] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.297 19:24:34 -- host/timeout.sh@32 -- # bdevperf_pid=87991 00:23:57.297 19:24:34 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:57.297 19:24:34 -- host/timeout.sh@34 -- # waitforlisten 87991 /var/tmp/bdevperf.sock 00:23:57.297 19:24:34 -- common/autotest_common.sh@817 -- # '[' -z 87991 ']' 00:23:57.297 19:24:34 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:57.297 19:24:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:57.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.297 19:24:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.297 19:24:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:57.297 19:24:34 -- common/autotest_common.sh@10 -- # set +x 00:23:57.563 [2024-02-14 19:24:34.754314] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:23:57.563 [2024-02-14 19:24:34.754412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87991 ] 00:23:57.563 [2024-02-14 19:24:34.890254] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.845 [2024-02-14 19:24:34.993952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.425 19:24:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:58.425 19:24:35 -- common/autotest_common.sh@850 -- # return 0 00:23:58.425 19:24:35 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:58.425 19:24:35 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:58.993 NVMe0n1 00:23:58.993 19:24:36 -- host/timeout.sh@51 -- # rpc_pid=88043 00:23:58.993 19:24:36 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:58.993 19:24:36 -- host/timeout.sh@53 -- # sleep 1 00:23:58.993 Running I/O for 10 seconds... 
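The two attach-time options in the trace above are the knobs this timeout test exercises: --reconnect-delay-sec 2 makes bdev_nvme wait two seconds between reconnect attempts after the connection to the target drops, and --ctrlr-loss-timeout-sec 5 bounds how long it keeps retrying before the controller is declared lost and failed. Issued by hand against the same bdevperf RPC socket, the attach looks like this (same arguments as the traced call, reproduced only as a sketch):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_options -r -1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2   # retry window / interval between retries

The attached namespace shows up as bdev NVMe0n1, which is what bdevperf.py perform_tests drives for the ten-second run that follows.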
00:23:59.932 19:24:37 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:59.932 [2024-02-14 19:24:37.334195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224c090 is same with the state(5) to be set
[last message repeated for tqpair=0x224c090 while the target tears the queue pair down]
00:23:59.933 [2024-02-14 19:24:37.335208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:59.933 [2024-02-14 19:24:37.335249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the remaining ASYNC EVENT REQUESTs on the admin queue (cid:1 through cid:3) are printed and aborted the same way]
00:23:59.933 [2024-02-14 19:24:37.335331] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbc3e0 is same with the state(5) to be set
00:23:59.933 [2024-02-14 19:24:37.335667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.933 [2024-02-14 19:24:37.335776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[each remaining queued READ/WRITE command on sqid:1 (lba 120936 through 122208) is printed and completed the same way: ABORTED - SQ DELETION (00/08)]
00:24:00.196 [2024-02-14 19:24:37.345987] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2003400 is same with the state(5) to be set
00:24:00.196 [2024-02-14 19:24:37.346000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:00.196 [2024-02-14 19:24:37.346008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:00.196 [2024-02-14 19:24:37.346017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121664 len:8 PRP1 0x0 PRP2 0x0
00:24:00.196 [2024-02-14 19:24:37.346027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.196 [2024-02-14 19:24:37.346082] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2003400 was disconnected and freed. reset controller.
00:24:00.196 [2024-02-14 19:24:37.346140] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbc3e0 (9): Bad file descriptor
00:24:00.196 [2024-02-14 19:24:37.346377] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.196 [2024-02-14 19:24:37.346474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.196 [2024-02-14 19:24:37.346567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.196 [2024-02-14 19:24:37.346587] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbc3e0 with addr=10.0.0.2, port=4420
00:24:00.196 [2024-02-14 19:24:37.346598] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbc3e0 is same with the state(5) to be set
00:24:00.196 [2024-02-14 19:24:37.346617] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbc3e0 (9): Bad file descriptor
00:24:00.196 [2024-02-14 19:24:37.346633] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.196 [2024-02-14 19:24:37.346643] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.196 [2024-02-14 19:24:37.346654] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.196 [2024-02-14 19:24:37.346674] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.196 [2024-02-14 19:24:37.346685] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.196 19:24:37 -- host/timeout.sh@56 -- # sleep 2
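What the teardown above amounts to: once host/timeout.sh removes the TCP listener, the target churns through recv-state transitions for its qpair (0x224c090), the host-side bdev_nvme driver aborts every queued admin and I/O command with ABORTED - SQ DELETION, frees the I/O qpair (0x2003400), and starts trying to reset the controller; each reconnect attempt then dies in posix_sock_create with errno = 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. A minimal manual repro of the same cycle, a sketch that reuses only the NQN, address, and socket paths already shown in this log:

    # Assumes an SPDK nvmf target and a bdevperf host are already running, as set up earlier in this test.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Drop the listener: queued host I/O is aborted (SQ DELETION) and reconnect attempts start failing.
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420

    # The controller stays registered on the host side while bdev_nvme keeps retrying the reset.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers

    # Re-adding the listener should let the next reconnect attempt succeed.
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420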
00:24:02.101 [2024-02-14 19:24:39.346878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[the reconnect attempt at 19:24:39 fails with the same sequence as above: connect() errno = 111, sock connection error of tqpair=0x1fbc3e0 with addr=10.0.0.2, port=4420, controller reinitialization failed, Resetting controller failed., resetting controller]
00:24:02.101 19:24:39 -- host/timeout.sh@57 -- # get_controller
00:24:02.101 19:24:39 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:02.101 19:24:39 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:02.360 19:24:39 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:24:02.360 19:24:39 -- host/timeout.sh@58 -- # get_bdev
00:24:02.360 19:24:39 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:02.360 19:24:39 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:02.619 19:24:39 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:24:02.619 19:24:39 -- host/timeout.sh@61 -- # sleep 5
00:24:03.996 [2024-02-14 19:24:41.347572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[the reconnect attempt at 19:24:41 fails the same way: sock connection error on tqpair=0x1fbc3e0, Ctrlr is in error state, controller reinitialization failed, in failed state., Resetting controller failed., resetting controller]
00:24:06.531 [2024-02-14 19:24:43.348215] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.099
00:24:07.099 Latency(us)
00:24:07.099 Device Information                          : runtime(s)    IOPS      MiB/s    Fail/s   TO/s      Average        min            max
00:24:07.099 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:07.099 Verification LBA range: start 0x0 length 0x4000
00:24:07.099 NVMe0n1                                     :       8.10    1870.16      7.31     15.80    0.00    67847.69      2308.65     7046430.72
00:24:07.099 ===================================================================================================================
00:24:07.099 Total                                       :               1870.16      7.31     15.80    0.00    67847.69      2308.65     7046430.72
00:24:07.099 0
00:24:07.668 19:24:44 -- host/timeout.sh@62 -- # get_controller
00:24:07.668 19:24:44 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:07.668 19:24:44 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:07.927 19:24:45 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:24:07.927 19:24:45 -- host/timeout.sh@63 -- # get_bdev
00:24:07.927 19:24:45 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:07.927 19:24:45 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:07.927 19:24:45 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:24:07.927 19:24:45 -- host/timeout.sh@65 -- # wait 88043
00:24:07.927 19:24:45 -- host/timeout.sh@67 -- # killprocess 87991
00:24:07.927 19:24:45 -- common/autotest_common.sh@924 -- # '[' -z 87991 ']'
00:24:07.927 19:24:45 -- common/autotest_common.sh@928 -- # kill -0 87991
00:24:07.927 19:24:45 -- common/autotest_common.sh@929 -- # uname
00:24:07.927 19:24:45 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:24:07.927 19:24:45 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 87991
00:24:08.186 19:24:45 -- common/autotest_common.sh@930 -- # process_name=reactor_2
00:24:08.186 19:24:45 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']'
00:24:08.186 killing process with pid 87991
00:24:08.186 19:24:45 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 87991'
00:24:08.186 19:24:45 -- common/autotest_common.sh@943 -- # kill 87991
00:24:08.186 Received shutdown signal, test time was about 9.111080 seconds
00:24:08.186
00:24:08.186 Latency(us)
00:24:08.186 Device Information                          : runtime(s)    IOPS      MiB/s    Fail/s   TO/s      Average        min            max
00:24:08.186 ===================================================================================================================
00:24:08.186 Total                                       :                  0.00      0.00      0.00    0.00        0.00         0.00           0.00
00:24:08.186 19:24:45 -- common/autotest_common.sh@948 -- # wait 87991
00:24:08.186 19:24:45 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:08.445 [2024-02-14 19:24:45.807444] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:08.445 19:24:45 -- host/timeout.sh@74 -- # bdevperf_pid=88195
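For reference, the bdevperf summary printed above covers the whole run of the first bdevperf instance (pid 87991): over 8.10 s of runtime the verify workload averaged 1870.16 IOPS, and at 4096 bytes per I/O that is 1870.16 * 4096 / 2^20, roughly 7.31 MiB/s, which matches the MiB/s column; Fail/s and TO/s count failed and timed-out I/O per second, and Average/min/max are latencies in microseconds (about 67.8 ms average, with a worst case near 7.0 s from the window when the listener was removed). A quick sanity check of the throughput figure, using only the numbers in the table:

    # 1870.16 IOPS of 4 KiB I/O expressed in MiB/s (prints 7.31 MiB/s).
    awk 'BEGIN { printf "%.2f MiB/s\n", 1870.16 * 4096 / 1048576 }'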
host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:08.445 19:24:45 -- host/timeout.sh@76 -- # waitforlisten 88195 /var/tmp/bdevperf.sock 00:24:08.445 19:24:45 -- common/autotest_common.sh@817 -- # '[' -z 88195 ']' 00:24:08.445 19:24:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.445 19:24:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:08.445 19:24:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.445 19:24:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:08.445 19:24:45 -- common/autotest_common.sh@10 -- # set +x 00:24:08.703 [2024-02-14 19:24:45.882589] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:24:08.703 [2024-02-14 19:24:45.882694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88195 ] 00:24:08.703 [2024-02-14 19:24:46.019655] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.703 [2024-02-14 19:24:46.099481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.637 19:24:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:09.637 19:24:46 -- common/autotest_common.sh@850 -- # return 0 00:24:09.637 19:24:46 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:09.637 19:24:46 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:09.894 NVMe0n1 00:24:09.894 19:24:47 -- host/timeout.sh@84 -- # rpc_pid=88243 00:24:09.894 19:24:47 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:09.894 19:24:47 -- host/timeout.sh@86 -- # sleep 1 00:24:09.894 Running I/O for 10 seconds... 
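(For reference, a minimal standalone sketch of the bdevperf attach sequence traced above, assuming an SPDK checkout laid out like /home/vagrant/spdk_repo/spdk in this run; the socket path, core mask, I/O parameters, target address, bdev name, and reconnect timeouts are the values used here, not defaults.)
  # start bdevperf in RPC-wait mode (-z) with the same verify workload
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  # apply the same option, then attach the TCP controller with the reconnect settings under test
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # kick off the timed run, as host/timeout.sh@83 does
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests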
00:24:10.829 19:24:48 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.091 [2024-02-14 19:24:48.407347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407625] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407787] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.407819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c7e0 is same with the state(5) to be set 00:24:11.091 [2024-02-14 19:24:48.408406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.091 [2024-02-14 19:24:48.408450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.091 [2024-02-14 19:24:48.408472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.091 [2024-02-14 19:24:48.408483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.091 [2024-02-14 19:24:48.408541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.091 [2024-02-14 19:24:48.408552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.091 [2024-02-14 19:24:48.408563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.091 [2024-02-14 19:24:48.408573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.091 [2024-02-14 19:24:48.408584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.091 [2024-02-14 19:24:48.408593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.091 [2024-02-14 19:24:48.408605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.091 [2024-02-14 19:24:48.408614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.091 [2024-02-14 19:24:48.408625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.091 [2024-02-14 19:24:48.408751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.091 [2024-02-14 19:24:48.408770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.091 [2024-02-14 19:24:48.408780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.091 [2024-02-14 19:24:48.408791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.091 [2024-02-14 19:24:48.408801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.091 [2024-02-14 19:24:48.408812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.091 [2024-02-14 19:24:48.408822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.409054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.409078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.409090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.409100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.409111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.409413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.409484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.409541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.409553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.409562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.409574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.409584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.409596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.409606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.409618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.409627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:11.092 [2024-02-14 19:24:48.409639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.409648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.410032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.410054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.410074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.410093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.410112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.410133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.410152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.410558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.410580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 
19:24:48.410592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.410601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.410622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.410643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.410663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.410674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.411051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.411067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.411077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.411088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.411097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.411108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.411117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.411128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.411137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.411148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.411156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.411167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.411551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.411573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.092 [2024-02-14 19:24:48.411583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.411594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.411603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.411614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.092 [2024-02-14 19:24:48.411623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.092 [2024-02-14 19:24:48.411634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.411643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.411653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.411662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.411672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.411681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.411910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.411930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.411943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.412076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.412232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.412498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.412518] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.412528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.412539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.412656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.412680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.412691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.412977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.413266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.413509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.413530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.413543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.413553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.413564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.413574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.413586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.413595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.413606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.413616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.413627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.413636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.413768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.413886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.413899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.413910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.414198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.414326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.414342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.093 [2024-02-14 19:24:48.414351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.414583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.414597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.414609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.414618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.414629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.414638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.414650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.414659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.414670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.414679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.414690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.414920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.414936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125584 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.414946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.415172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.415195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.415209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.415219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.415230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.415240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.415251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.415260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.415271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.415280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.415291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.415301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.415688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.415701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.415714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.415724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.415735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.093 [2024-02-14 19:24:48.415744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.093 [2024-02-14 19:24:48.415755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:11.093 [2024-02-14 19:24:48.415765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.415776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-02-14 19:24:48.415785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.415796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.416158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.416184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.416205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.416225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.416246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.416267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.416627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-02-14 19:24:48.416649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 
19:24:48.416669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-02-14 19:24:48.416932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.416954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.416974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.416985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.416994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.417229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.417252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.417272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-02-14 19:24:48.417293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.417534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.417555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.417575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.417840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.417877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.417898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.417909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.418155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.418178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.418189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.418201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.418210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.418221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-02-14 19:24:48.418230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.418482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-02-14 19:24:48.418512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.418526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.418535] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.418546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.418556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.418568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-02-14 19:24:48.418577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.418837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.094 [2024-02-14 19:24:48.418850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.094 [2024-02-14 19:24:48.418861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.094 [2024-02-14 19:24:48.418872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.418883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-02-14 19:24:48.418892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.419156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-02-14 19:24:48.419279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.419301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-02-14 19:24:48.419311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.419542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-02-14 19:24:48.419562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.419574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-02-14 19:24:48.419584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.419595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-02-14 19:24:48.419604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.419614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.095 [2024-02-14 19:24:48.419623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.419761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-02-14 19:24:48.419891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.419912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-02-14 19:24:48.420159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.420182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-02-14 19:24:48.420192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.420204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-02-14 19:24:48.420213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.420224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-02-14 19:24:48.420234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.420477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-02-14 19:24:48.420506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.420519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.095 [2024-02-14 19:24:48.420529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.420563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:11.095 [2024-02-14 19:24:48.420706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:11.095 [2024-02-14 19:24:48.420853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125920 len:8 PRP1 0x0 PRP2 0x0 00:24:11.095 [2024-02-14 19:24:48.421001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.421258] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x667400 was disconnected and freed. reset controller. 00:24:11.095 [2024-02-14 19:24:48.421536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.095 [2024-02-14 19:24:48.421563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.421575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.095 [2024-02-14 19:24:48.421584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.421594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.095 [2024-02-14 19:24:48.421603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.421612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.095 [2024-02-14 19:24:48.421621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.095 [2024-02-14 19:24:48.421630] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203e0 is same with the state(5) to be set 00:24:11.095 [2024-02-14 19:24:48.422024] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:11.095 [2024-02-14 19:24:48.422069] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6203e0 (9): Bad file descriptor 00:24:11.095 [2024-02-14 19:24:48.422356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.095 [2024-02-14 19:24:48.422419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.095 [2024-02-14 19:24:48.422436] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6203e0 with addr=10.0.0.2, port=4420 00:24:11.095 [2024-02-14 19:24:48.422446] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203e0 is same with the state(5) to be set 00:24:11.095 [2024-02-14 19:24:48.422661] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6203e0 (9): Bad file descriptor 00:24:11.095 [2024-02-14 19:24:48.422697] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:11.095 [2024-02-14 19:24:48.422708] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:11.095 [2024-02-14 19:24:48.422719] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:11.095 [2024-02-14 19:24:48.422740] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
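(Note on the repeated "connect() failed, errno = 111" records: on Linux errno 111 is ECONNREFUSED, so each reconnect attempt is refused while the subsystem listener is removed; bdev_nvme retries roughly once per --reconnect-delay-sec until the listener returns or the 5-second ctrlr-loss timeout expires. A quick way to confirm the errno name, shown only as a convenience:)
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'   # ECONNREFUSED Connection refused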
00:24:11.095 [2024-02-14 19:24:48.422978] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:11.095 19:24:48 -- host/timeout.sh@90 -- # sleep 1 
00:24:12.032 [2024-02-14 19:24:49.423090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:12.032 [2024-02-14 19:24:49.423173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:12.032 [2024-02-14 19:24:49.423190] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6203e0 with addr=10.0.0.2, port=4420 
00:24:12.032 [2024-02-14 19:24:49.423200] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203e0 is same with the state(5) to be set 
00:24:12.032 [2024-02-14 19:24:49.423218] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6203e0 (9): Bad file descriptor 
00:24:12.032 [2024-02-14 19:24:49.423234] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:24:12.032 [2024-02-14 19:24:49.423243] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:24:12.032 [2024-02-14 19:24:49.423252] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:12.032 [2024-02-14 19:24:49.423277] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.032 [2024-02-14 19:24:49.423286] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:12.032 19:24:49 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:24:12.291 [2024-02-14 19:24:49.666092] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:12.291 19:24:49 -- host/timeout.sh@92 -- # wait 88243 
00:24:13.226 [2024-02-14 19:24:50.439765] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:21.352 
00:24:21.352 Latency(us) 
00:24:21.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:21.352 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:24:21.352 Verification LBA range: start 0x0 length 0x4000 
00:24:21.352 NVMe0n1 : 10.01 10252.44 40.05 0.00 0.00 12472.83 1623.51 3035150.89 
00:24:21.352 =================================================================================================================== 
00:24:21.352 Total : 10252.44 40.05 0.00 0.00 12472.83 1623.51 3035150.89 
00:24:21.352 0 
00:24:21.352 19:24:57 -- host/timeout.sh@97 -- # rpc_pid=88361 
00:24:21.352 19:24:57 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:24:21.352 19:24:57 -- host/timeout.sh@98 -- # sleep 1 
00:24:21.352 Running I/O for 10 seconds... 
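As a quick sanity check on the bdevperf summary above (an aside, not part of the test output): with the 4096-byte IO size shown in the Job line, the MiB/s column is consistent with IOPS scaled by the IO size. A one-line awk sketch using the values from the NVMe0n1 row:
  awk 'BEGIN { iops = 10252.44; io_size = 4096          # values taken from the NVMe0n1 row above
               printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
  # prints 40.05 MiB/s, matching the reported throughput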
00:24:21.352 19:24:58 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.352 [2024-02-14 19:24:58.564735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.564997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565197] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.352 [2024-02-14 19:24:58.565212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299680 is same with the state(5) to be set 00:24:21.353 [2024-02-14 19:24:58.565834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.565888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.565925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.565936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.565947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.565956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.565967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.565976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.565986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.565995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.566005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.566014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.566040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.566478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.566595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.566608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.566619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.566630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.566641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.566651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.566662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.566671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.566682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.566692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.566703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.566712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567700] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.353 [2024-02-14 19:24:58.567812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.353 [2024-02-14 19:24:58.567849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.567984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.353 [2024-02-14 19:24:58.567993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.353 [2024-02-14 19:24:58.568004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.568014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.568033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.568269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.568293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.568313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.568333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.568352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.568634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.568656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.568676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.568817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.568829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.569095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.569117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.569156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.569195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.569215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.569266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 
[2024-02-14 19:24:58.569299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.569355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.569374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.354 [2024-02-14 19:24:58.569393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569486] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.354 [2024-02-14 19:24:58.569608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.354 [2024-02-14 19:24:58.569618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.355 [2024-02-14 19:24:58.569661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.355 [2024-02-14 19:24:58.569682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.355 [2024-02-14 19:24:58.569724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.569984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.569993] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.570018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.570035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.570046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.570055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.570065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.570373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.570393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.570403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.570704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.355 [2024-02-14 19:24:58.570717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.570728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.570956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.570978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.570988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.571025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.571036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.571048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.571058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.571069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.571079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.571090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.571105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.571117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.571126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.571137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.571153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.355 [2024-02-14 19:24:58.571165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.355 [2024-02-14 19:24:58.571174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.356 [2024-02-14 19:24:58.571195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.356 [2024-02-14 19:24:58.571216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.356 [2024-02-14 19:24:58.571237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.356 [2024-02-14 19:24:58.571258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.356 [2024-02-14 19:24:58.571279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.356 [2024-02-14 19:24:58.571300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.356 [2024-02-14 19:24:58.571347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.356 [2024-02-14 19:24:58.571366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.356 [2024-02-14 19:24:58.571401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.356 [2024-02-14 19:24:58.571420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.356 [2024-02-14 19:24:58.571439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.356 [2024-02-14 19:24:58.571457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.356 [2024-02-14 19:24:58.571480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.356 [2024-02-14 19:24:58.571511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.356 [2024-02-14 19:24:58.571554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.356 [2024-02-14 19:24:58.571586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 
[2024-02-14 19:24:58.571598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.356 [2024-02-14 19:24:58.571608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.356 [2024-02-14 19:24:58.571630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.356 [2024-02-14 19:24:58.571642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.356 [2024-02-14 19:24:58.571652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.357 [2024-02-14 19:24:58.571672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.357 [2024-02-14 19:24:58.571699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.357 [2024-02-14 19:24:58.571720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.357 [2024-02-14 19:24:58.571740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.357 [2024-02-14 19:24:58.571761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.357 [2024-02-14 19:24:58.571781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.357 [2024-02-14 19:24:58.571802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571812] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.357 [2024-02-14 19:24:58.571822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.357 [2024-02-14 19:24:58.571842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.357 [2024-02-14 19:24:58.571896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.571906] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x695a70 is same with the state(5) to be set 00:24:21.357 [2024-02-14 19:24:58.571918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.357 [2024-02-14 19:24:58.572248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.357 [2024-02-14 19:24:58.572263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126000 len:8 PRP1 0x0 PRP2 0x0 00:24:21.357 [2024-02-14 19:24:58.572274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.572328] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x695a70 was disconnected and freed. reset controller. 
00:24:21.357 [2024-02-14 19:24:58.572416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.357 [2024-02-14 19:24:58.572432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.572443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.357 [2024-02-14 19:24:58.572451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.572461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.357 [2024-02-14 19:24:58.572469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.572479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.357 [2024-02-14 19:24:58.572519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.357 [2024-02-14 19:24:58.572531] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203e0 is same with the state(5) to be set 00:24:21.357 [2024-02-14 19:24:58.572747] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:21.357 [2024-02-14 19:24:58.572769] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6203e0 (9): Bad file descriptor 00:24:21.357 [2024-02-14 19:24:58.572890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.357 [2024-02-14 19:24:58.572950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.357 [2024-02-14 19:24:58.572965] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6203e0 with addr=10.0.0.2, port=4420 00:24:21.357 [2024-02-14 19:24:58.572975] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203e0 is same with the state(5) to be set 00:24:21.357 [2024-02-14 19:24:58.572992] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6203e0 (9): Bad file descriptor 00:24:21.357 [2024-02-14 19:24:58.573007] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:21.357 [2024-02-14 19:24:58.573016] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:21.357 [2024-02-14 19:24:58.573025] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:21.357 [2024-02-14 19:24:58.573044] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.357 [2024-02-14 19:24:58.573053] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:21.357 19:24:58 -- host/timeout.sh@101 -- # sleep 3 00:24:22.297 [2024-02-14 19:24:59.573128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.297 [2024-02-14 19:24:59.573644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.297 [2024-02-14 19:24:59.573884] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6203e0 with addr=10.0.0.2, port=4420 00:24:22.297 [2024-02-14 19:24:59.574259] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203e0 is same with the state(5) to be set 00:24:22.297 [2024-02-14 19:24:59.574660] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6203e0 (9): Bad file descriptor 00:24:22.297 [2024-02-14 19:24:59.575105] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.297 [2024-02-14 19:24:59.575529] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.297 [2024-02-14 19:24:59.575891] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.297 [2024-02-14 19:24:59.576166] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.297 [2024-02-14 19:24:59.576375] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:23.233 [2024-02-14 19:25:00.576867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.233 [2024-02-14 19:25:00.577454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.233 [2024-02-14 19:25:00.577720] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6203e0 with addr=10.0.0.2, port=4420 00:24:23.233 [2024-02-14 19:25:00.578158] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203e0 is same with the state(5) to be set 00:24:23.233 [2024-02-14 19:25:00.578570] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6203e0 (9): Bad file descriptor 00:24:23.233 [2024-02-14 19:25:00.579055] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.233 [2024-02-14 19:25:00.579480] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:23.233 [2024-02-14 19:25:00.579864] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.233 [2024-02-14 19:25:00.580127] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:23.233 [2024-02-14 19:25:00.580155] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.169 [2024-02-14 19:25:01.580398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.169 [2024-02-14 19:25:01.580778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.169 [2024-02-14 19:25:01.581010] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6203e0 with addr=10.0.0.2, port=4420 00:24:24.169 [2024-02-14 19:25:01.581414] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6203e0 is same with the state(5) to be set 00:24:24.169 [2024-02-14 19:25:01.582046] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6203e0 (9): Bad file descriptor 00:24:24.169 [2024-02-14 19:25:01.582678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.169 [2024-02-14 19:25:01.583111] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.169 [2024-02-14 19:25:01.583509] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.169 [2024-02-14 19:25:01.586135] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.428 [2024-02-14 19:25:01.586389] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.428 19:25:01 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.428 [2024-02-14 19:25:01.815677] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.428 19:25:01 -- host/timeout.sh@103 -- # wait 88361 00:24:25.364 [2024-02-14 19:25:02.602389] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
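The block above is the core of the timeout scenario: while the target's TCP listener is down, every reconnect attempt fails in posix_sock_create with errno 111 (connection refused), and the first controller reset after nvmf_subsystem_add_listener restores the listener completes with "Resetting controller successful." A minimal sketch of that sequence, assembled only from commands that appear in this trace (it assumes an SPDK checkout at /home/vagrant/spdk_repo/spdk, a running NVMe-oF target exporting nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and a bdevperf instance answering RPCs on /var/tmp/bdevperf.sock; the timeout values are the ones used by the second attach later in this log, not necessarily those of the run above):

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc=$rootdir/scripts/rpc.py

    # Attach the remote controller on the bdevperf side; with these options the
    # bdev layer retries the connection every 2 s for at most 5 s before giving up.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Drop the target listener: queued I/O is aborted (ABORTED - SQ DELETION) and
    # reconnect attempts fail with errno 111, as in the messages above.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3

    # Restore the listener: the next reset attempt reconnects and the log reports
    # "Resetting controller successful."
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420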
00:24:30.685
00:24:30.685 Latency(us)
00:24:30.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:30.685 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:30.685 Verification LBA range: start 0x0 length 0x4000
00:24:30.685 NVMe0n1 : 10.01 8688.87 33.94 6802.59 0.00 8246.16 569.72 3019898.88
00:24:30.685 ===================================================================================================================
00:24:30.685 Total : 8688.87 33.94 6802.59 0.00 8246.16 0.00 3019898.88
00:24:30.685 0
00:24:30.685 19:25:07 -- host/timeout.sh@105 -- # killprocess 88195 00:24:30.685 19:25:07 -- common/autotest_common.sh@924 -- # '[' -z 88195 ']' 00:24:30.685 19:25:07 -- common/autotest_common.sh@928 -- # kill -0 88195 00:24:30.685 19:25:07 -- common/autotest_common.sh@929 -- # uname 00:24:30.685 19:25:07 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:30.685 19:25:07 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 88195
00:24:30.685 killing process with pid 88195
Received shutdown signal, test time was about 10.000000 seconds
00:24:30.685
00:24:30.685 Latency(us)
00:24:30.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:30.685 ===================================================================================================================
00:24:30.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:30.685 19:25:07 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:24:30.685 19:25:07 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:24:30.685 19:25:07 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 88195' 00:24:30.685 19:25:07 -- common/autotest_common.sh@943 -- # kill 88195 00:24:30.685 19:25:07 -- common/autotest_common.sh@948 -- # wait 88195
00:24:30.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:30.685 19:25:07 -- host/timeout.sh@110 -- # bdevperf_pid=88482 00:24:30.685 19:25:07 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:24:30.685 19:25:07 -- host/timeout.sh@112 -- # waitforlisten 88482 /var/tmp/bdevperf.sock 00:24:30.685 19:25:07 -- common/autotest_common.sh@817 -- # '[' -z 88482 ']' 00:24:30.685 19:25:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.685 19:25:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:30.685 19:25:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.685 19:25:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:30.685 19:25:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.685 [2024-02-14 19:25:07.774417] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization...
00:24:30.685 [2024-02-14 19:25:07.775501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88482 ] 00:24:30.685 [2024-02-14 19:25:07.920383] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.686 [2024-02-14 19:25:08.013557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.251 19:25:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:31.251 19:25:08 -- common/autotest_common.sh@850 -- # return 0 00:24:31.251 19:25:08 -- host/timeout.sh@116 -- # dtrace_pid=88510 00:24:31.251 19:25:08 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88482 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:31.252 19:25:08 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:31.510 19:25:08 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:31.769 NVMe0n1 00:24:32.028 19:25:09 -- host/timeout.sh@124 -- # rpc_pid=88562 00:24:32.028 19:25:09 -- host/timeout.sh@125 -- # sleep 1 00:24:32.028 19:25:09 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:32.028 Running I/O for 10 seconds... 00:24:32.964 19:25:10 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.964 [2024-02-14 19:25:10.374424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374610] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374783] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.964 [2024-02-14 19:25:10.374903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the 
state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.374998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375099] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 
19:25:10.375410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.375571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.376161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.376419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.376565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.376685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:32.965 [2024-02-14 19:25:10.376796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same 
with the state(5) to be set 00:24:33.227 [2024-02-14 19:25:10.376856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:33.227 [2024-02-14 19:25:10.376987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:33.227 [2024-02-14 19:25:10.377000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:33.227 [2024-02-14 19:25:10.377008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:33.227 [2024-02-14 19:25:10.377016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:33.227 [2024-02-14 19:25:10.377039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:33.227 [2024-02-14 19:25:10.377046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229cb20 is same with the state(5) to be set 00:24:33.227 [2024-02-14 19:25:10.377377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377569] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.227 [2024-02-14 19:25:10.377953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.227 [2024-02-14 19:25:10.377964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.377973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.377985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.377994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:33.228 [2024-02-14 19:25:10.378025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 
19:25:10.378260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.378841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.378852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.379148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.379171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.379181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.379192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.379202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.379214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.379224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.379235] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.379245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.228 [2024-02-14 19:25:10.379256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.228 [2024-02-14 19:25:10.379266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 
19:25:10.379943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.379983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.379994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.380003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.380013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.380027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.380043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.380052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.380062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.380072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.380082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.380106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.380116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.380125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.229 [2024-02-14 19:25:10.380135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.229 [2024-02-14 19:25:10.380144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380358] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.380729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.380740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.381285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.381302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.381313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.381323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.381332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.381343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.230 [2024-02-14 19:25:10.381352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.381362] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f2400 is same with the state(5) to be set 00:24:33.230 [2024-02-14 19:25:10.381374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.230 [2024-02-14 19:25:10.381381] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.230 [2024-02-14 19:25:10.381389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87544 len:8 PRP1 0x0 PRP2 0x0 00:24:33.230 [2024-02-14 19:25:10.381397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.230 [2024-02-14 19:25:10.381553] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15f2400 was disconnected and freed. reset controller. 00:24:33.230 [2024-02-14 19:25:10.382078] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.230 [2024-02-14 19:25:10.382190] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ab3e0 (9): Bad file descriptor 00:24:33.230 [2024-02-14 19:25:10.382300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.230 [2024-02-14 19:25:10.382348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.230 [2024-02-14 19:25:10.382365] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ab3e0 with addr=10.0.0.2, port=4420 00:24:33.230 [2024-02-14 19:25:10.382375] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ab3e0 is same with the state(5) to be set 00:24:33.230 [2024-02-14 19:25:10.382393] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ab3e0 (9): Bad file descriptor 00:24:33.231 [2024-02-14 19:25:10.382409] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.231 [2024-02-14 19:25:10.382419] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:33.231 [2024-02-14 19:25:10.382429] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.231 [2024-02-14 19:25:10.382449] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:33.231 [2024-02-14 19:25:10.382459] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:33.231 19:25:10 -- host/timeout.sh@128 -- # wait 88562 00:24:35.134 [2024-02-14 19:25:12.382573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.134 [2024-02-14 19:25:12.382661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.134 [2024-02-14 19:25:12.382680] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ab3e0 with addr=10.0.0.2, port=4420 00:24:35.134 [2024-02-14 19:25:12.382691] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ab3e0 is same with the state(5) to be set 00:24:35.134 [2024-02-14 19:25:12.382712] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ab3e0 (9): Bad file descriptor 00:24:35.134 [2024-02-14 19:25:12.382728] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.134 [2024-02-14 19:25:12.382737] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.134 [2024-02-14 19:25:12.382747] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.134 [2024-02-14 19:25:12.382767] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.134 [2024-02-14 19:25:12.382778] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.036 [2024-02-14 19:25:14.382868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-02-14 19:25:14.382955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.036 [2024-02-14 19:25:14.382974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ab3e0 with addr=10.0.0.2, port=4420 00:24:37.036 [2024-02-14 19:25:14.382985] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ab3e0 is same with the state(5) to be set 00:24:37.036 [2024-02-14 19:25:14.383013] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ab3e0 (9): Bad file descriptor 00:24:37.036 [2024-02-14 19:25:14.383047] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.036 [2024-02-14 19:25:14.383056] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.036 [2024-02-14 19:25:14.383065] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.036 [2024-02-14 19:25:14.383085] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.036 [2024-02-14 19:25:14.383095] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.570 [2024-02-14 19:25:16.383140] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
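Note: errno 111 in the posix_sock_create failures above is ECONNREFUSED on Linux; nothing is listening on 10.0.0.2:4420 at this point, so bdev_nvme retries the connection roughly every 2 seconds (19:25:10, :12, :14, :16) and each reset attempt fails again, which appears to be the scenario host/timeout.sh is exercising. A quick spot check, assuming python3 is available on the test VM (the line below is illustrative and not part of timeout.sh):

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # ECONNREFUSED Connection refused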
00:24:39.570 00:24:39.570 Latency(us) 00:24:39.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.570 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:39.570 NVMe0n1 : 7.12 3326.08 12.99 53.96 0.00 37845.99 231.80 6039797.76 00:24:39.570 =================================================================================================================== 00:24:39.570 Total : 3326.08 12.99 53.96 0.00 37845.99 231.80 6039797.76 00:24:39.570 0 00:24:39.570 19:25:16 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:39.570 Attaching 5 probes... 00:24:39.570 1172.526411: reset bdev controller NVMe0 00:24:39.570 1172.692896: reconnect bdev controller NVMe0 00:24:39.570 3172.931669: reconnect delay bdev controller NVMe0 00:24:39.570 3172.947549: reconnect bdev controller NVMe0 00:24:39.570 5173.249313: reconnect delay bdev controller NVMe0 00:24:39.570 5173.265086: reconnect bdev controller NVMe0 00:24:39.570 7173.568096: reconnect delay bdev controller NVMe0 00:24:39.570 7173.583087: reconnect bdev controller NVMe0 00:24:39.570 19:25:16 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:39.570 19:25:16 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:39.570 19:25:16 -- host/timeout.sh@136 -- # kill 88510 00:24:39.570 19:25:16 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:39.570 19:25:16 -- host/timeout.sh@139 -- # killprocess 88482 00:24:39.570 19:25:16 -- common/autotest_common.sh@924 -- # '[' -z 88482 ']' 00:24:39.570 19:25:16 -- common/autotest_common.sh@928 -- # kill -0 88482 00:24:39.570 19:25:16 -- common/autotest_common.sh@929 -- # uname 00:24:39.570 19:25:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:39.570 19:25:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 88482 00:24:39.571 19:25:16 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:24:39.571 19:25:16 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:24:39.571 19:25:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 88482' 00:24:39.571 killing process with pid 88482 00:24:39.571 Received shutdown signal, test time was about 7.172966 seconds 00:24:39.571 00:24:39.571 Latency(us) 00:24:39.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.571 =================================================================================================================== 00:24:39.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.571 19:25:16 -- common/autotest_common.sh@943 -- # kill 88482 00:24:39.571 19:25:16 -- common/autotest_common.sh@948 -- # wait 88482 00:24:39.571 19:25:16 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:39.571 19:25:16 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:39.571 19:25:16 -- host/timeout.sh@145 -- # nvmftestfini 00:24:39.571 19:25:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:39.571 19:25:16 -- nvmf/common.sh@116 -- # sync 00:24:39.571 19:25:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:39.571 19:25:16 -- nvmf/common.sh@119 -- # set +e 00:24:39.571 19:25:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:39.571 19:25:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:39.571 rmmod nvme_tcp 00:24:39.571 rmmod nvme_fabrics 00:24:39.830 rmmod nvme_keyring 00:24:39.830 19:25:17 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-fabrics 00:24:39.830 19:25:17 -- nvmf/common.sh@123 -- # set -e 00:24:39.830 19:25:17 -- nvmf/common.sh@124 -- # return 0 00:24:39.830 19:25:17 -- nvmf/common.sh@477 -- # '[' -n 87904 ']' 00:24:39.830 19:25:17 -- nvmf/common.sh@478 -- # killprocess 87904 00:24:39.830 19:25:17 -- common/autotest_common.sh@924 -- # '[' -z 87904 ']' 00:24:39.830 19:25:17 -- common/autotest_common.sh@928 -- # kill -0 87904 00:24:39.830 19:25:17 -- common/autotest_common.sh@929 -- # uname 00:24:39.830 19:25:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:39.830 19:25:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 87904 00:24:39.830 killing process with pid 87904 00:24:39.830 19:25:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:39.830 19:25:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:39.830 19:25:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 87904' 00:24:39.830 19:25:17 -- common/autotest_common.sh@943 -- # kill 87904 00:24:39.830 19:25:17 -- common/autotest_common.sh@948 -- # wait 87904 00:24:40.089 19:25:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:40.089 19:25:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:40.089 19:25:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:40.089 19:25:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:40.089 19:25:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:40.089 19:25:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.089 19:25:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.089 19:25:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.089 19:25:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:40.089 ************************************ 00:24:40.089 END TEST nvmf_timeout 00:24:40.089 ************************************ 00:24:40.089 00:24:40.089 real 0m45.058s 00:24:40.089 user 2m10.855s 00:24:40.089 sys 0m5.488s 00:24:40.089 19:25:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:40.089 19:25:17 -- common/autotest_common.sh@10 -- # set +x 00:24:40.089 19:25:17 -- nvmf/nvmf.sh@119 -- # [[ virt == phy ]] 00:24:40.089 19:25:17 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:24:40.089 19:25:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:40.089 19:25:17 -- common/autotest_common.sh@10 -- # set +x 00:24:40.089 19:25:17 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:24:40.089 ************************************ 00:24:40.089 END TEST nvmf_tcp 00:24:40.089 ************************************ 00:24:40.089 00:24:40.089 real 18m11.647s 00:24:40.089 user 57m17.690s 00:24:40.089 sys 3m51.946s 00:24:40.089 19:25:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:40.089 19:25:17 -- common/autotest_common.sh@10 -- # set +x 00:24:40.089 19:25:17 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:24:40.089 19:25:17 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:40.089 19:25:17 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:24:40.089 19:25:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:40.089 19:25:17 -- common/autotest_common.sh@10 -- # set +x 00:24:40.089 ************************************ 00:24:40.089 START TEST spdkcli_nvmf_tcp 00:24:40.089 ************************************ 00:24:40.089 19:25:17 -- common/autotest_common.sh@1102 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:40.348 * Looking for test storage... 00:24:40.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:40.348 19:25:17 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:40.348 19:25:17 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:40.348 19:25:17 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:40.348 19:25:17 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:40.348 19:25:17 -- nvmf/common.sh@7 -- # uname -s 00:24:40.348 19:25:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.348 19:25:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.348 19:25:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.348 19:25:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.348 19:25:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.348 19:25:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.348 19:25:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.348 19:25:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.348 19:25:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.348 19:25:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.348 19:25:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:24:40.348 19:25:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:24:40.348 19:25:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.348 19:25:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.348 19:25:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:40.348 19:25:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:40.348 19:25:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.348 19:25:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.348 19:25:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.348 19:25:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.348 19:25:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.348 19:25:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.348 19:25:17 -- paths/export.sh@5 -- # export PATH 00:24:40.348 19:25:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.348 19:25:17 -- nvmf/common.sh@46 -- # : 0 00:24:40.348 19:25:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:40.348 19:25:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:40.348 19:25:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:40.348 19:25:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.348 19:25:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.348 19:25:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:40.348 19:25:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:40.348 19:25:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:40.348 19:25:17 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:40.348 19:25:17 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:40.348 19:25:17 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:40.348 19:25:17 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:40.348 19:25:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:40.348 19:25:17 -- common/autotest_common.sh@10 -- # set +x 00:24:40.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.348 19:25:17 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:40.348 19:25:17 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=88762 00:24:40.348 19:25:17 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:40.348 19:25:17 -- spdkcli/common.sh@34 -- # waitforlisten 88762 00:24:40.348 19:25:17 -- common/autotest_common.sh@817 -- # '[' -z 88762 ']' 00:24:40.348 19:25:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.348 19:25:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:40.348 19:25:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.348 19:25:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:40.348 19:25:17 -- common/autotest_common.sh@10 -- # set +x 00:24:40.348 [2024-02-14 19:25:17.660781] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
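For reference, the target in this spdkcli test is launched as build/bin/nvmf_tgt -m 0x3 -p 0: -m 0x3 is the reactor core mask (cores 0 and 1, matching the two "Reactor started" notices that follow) and -p 0 picks core 0 as the main core. A small bash sketch, not part of the test scripts, that expands such a mask:

    mask=0x3
    printf 'cores in -m %s:' "$mask"
    for i in {0..7}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo
    # cores in -m 0x3: 0 1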
00:24:40.348 [2024-02-14 19:25:17.661026] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88762 ] 00:24:40.607 [2024-02-14 19:25:17.800398] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:40.607 [2024-02-14 19:25:17.889492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:40.607 [2024-02-14 19:25:17.889781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.607 [2024-02-14 19:25:17.889793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.543 19:25:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:41.543 19:25:18 -- common/autotest_common.sh@850 -- # return 0 00:24:41.543 19:25:18 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:41.543 19:25:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:41.543 19:25:18 -- common/autotest_common.sh@10 -- # set +x 00:24:41.543 19:25:18 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:41.543 19:25:18 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:41.543 19:25:18 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:41.543 19:25:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:41.543 19:25:18 -- common/autotest_common.sh@10 -- # set +x 00:24:41.543 19:25:18 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:41.543 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:41.543 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:41.543 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:41.543 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:41.543 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:41.543 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:41.543 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:41.543 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:41.543 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:41.543 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:41.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:41.543 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:41.543 ' 00:24:41.803 [2024-02-14 19:25:19.144912] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:44.336 [2024-02-14 19:25:21.373993] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.273 [2024-02-14 19:25:22.647898] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:47.805 [2024-02-14 19:25:24.995220] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:49.707 [2024-02-14 19:25:27.014314] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:51.614 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:51.614 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:51.614 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:51.614 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:51.614 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:51.614 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:51.614 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:51.614 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:51.614 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:51.614 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:51.614 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:51.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:51.614 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:51.614 19:25:28 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:51.614 19:25:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:51.614 19:25:28 -- common/autotest_common.sh@10 -- # set +x 00:24:51.614 19:25:28 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:51.614 19:25:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:51.614 19:25:28 -- common/autotest_common.sh@10 -- # set +x 00:24:51.614 19:25:28 -- spdkcli/nvmf.sh@69 -- # check_match 00:24:51.614 19:25:28 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:24:51.873 19:25:29 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:51.873 19:25:29 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:51.873 19:25:29 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:51.873 19:25:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:51.873 19:25:29 -- common/autotest_common.sh@10 -- # set +x 00:24:51.873 19:25:29 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:51.873 19:25:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:51.873 19:25:29 -- 
common/autotest_common.sh@10 -- # set +x 00:24:51.873 19:25:29 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:51.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:51.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:51.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:51.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:51.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:51.873 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:51.873 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:51.873 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:51.873 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:51.873 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:51.873 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:51.873 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:51.873 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:51.873 ' 00:24:57.140 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:57.140 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:57.140 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:57.140 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:57.140 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:57.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:57.141 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:57.141 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:57.141 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:57.141 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:57.141 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:57.141 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:57.141 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:57.141 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:57.141 19:25:34 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:57.141 19:25:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:57.141 19:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.400 19:25:34 -- spdkcli/nvmf.sh@90 -- # killprocess 88762 00:24:57.400 19:25:34 -- common/autotest_common.sh@924 -- # '[' -z 88762 ']' 00:24:57.400 19:25:34 -- common/autotest_common.sh@928 -- # kill -0 88762 00:24:57.400 19:25:34 -- common/autotest_common.sh@929 -- # uname 00:24:57.400 19:25:34 -- 
common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:57.400 19:25:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 88762 00:24:57.400 19:25:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:57.400 19:25:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:57.400 19:25:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 88762' 00:24:57.400 killing process with pid 88762 00:24:57.400 19:25:34 -- common/autotest_common.sh@943 -- # kill 88762 00:24:57.400 [2024-02-14 19:25:34.588855] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:57.400 19:25:34 -- common/autotest_common.sh@948 -- # wait 88762 00:24:57.400 19:25:34 -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:57.400 19:25:34 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:57.400 19:25:34 -- spdkcli/common.sh@13 -- # '[' -n 88762 ']' 00:24:57.400 19:25:34 -- spdkcli/common.sh@14 -- # killprocess 88762 00:24:57.400 19:25:34 -- common/autotest_common.sh@924 -- # '[' -z 88762 ']' 00:24:57.400 19:25:34 -- common/autotest_common.sh@928 -- # kill -0 88762 00:24:57.400 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (88762) - No such process 00:24:57.400 Process with pid 88762 is not found 00:24:57.400 19:25:34 -- common/autotest_common.sh@951 -- # echo 'Process with pid 88762 is not found' 00:24:57.400 19:25:34 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:57.400 19:25:34 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:57.400 19:25:34 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:57.659 ************************************ 00:24:57.659 END TEST spdkcli_nvmf_tcp 00:24:57.659 ************************************ 00:24:57.659 00:24:57.659 real 0m17.326s 00:24:57.659 user 0m37.104s 00:24:57.659 sys 0m0.913s 00:24:57.659 19:25:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:57.659 19:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.659 19:25:34 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:57.659 19:25:34 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:24:57.659 19:25:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:57.659 19:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.659 ************************************ 00:24:57.659 START TEST nvmf_identify_passthru 00:24:57.659 ************************************ 00:24:57.659 19:25:34 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:57.659 * Looking for test storage... 
00:24:57.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:57.659 19:25:34 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:57.659 19:25:34 -- nvmf/common.sh@7 -- # uname -s 00:24:57.659 19:25:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.659 19:25:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.659 19:25:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.659 19:25:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.659 19:25:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.659 19:25:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.659 19:25:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.659 19:25:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.659 19:25:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.659 19:25:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.660 19:25:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:24:57.660 19:25:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:24:57.660 19:25:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.660 19:25:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.660 19:25:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:57.660 19:25:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:57.660 19:25:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.660 19:25:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.660 19:25:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.660 19:25:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.660 19:25:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.660 19:25:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.660 19:25:34 -- paths/export.sh@5 -- # export PATH 00:24:57.660 19:25:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.660 19:25:34 -- nvmf/common.sh@46 -- # : 0 00:24:57.660 19:25:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:57.660 19:25:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:57.660 19:25:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:57.660 19:25:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.660 19:25:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.660 19:25:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:57.660 19:25:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:57.660 19:25:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:57.660 19:25:34 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:57.660 19:25:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.660 19:25:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.660 19:25:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.660 19:25:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.660 19:25:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.660 19:25:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.660 19:25:34 -- paths/export.sh@5 -- # export PATH 00:24:57.660 19:25:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.660 19:25:34 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:24:57.660 19:25:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:57.660 19:25:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.660 19:25:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:57.660 19:25:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:57.660 19:25:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:57.660 19:25:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.660 19:25:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:57.660 19:25:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.660 19:25:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:57.660 19:25:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:57.660 19:25:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:57.660 19:25:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:57.660 19:25:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:57.660 19:25:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:57.660 19:25:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.660 19:25:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.660 19:25:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:57.660 19:25:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:57.660 19:25:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:57.660 19:25:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:57.660 19:25:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:57.660 19:25:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.660 19:25:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:57.660 19:25:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:57.660 19:25:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:57.660 19:25:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:57.660 19:25:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:57.660 19:25:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:57.660 Cannot find device "nvmf_tgt_br" 00:24:57.660 19:25:35 -- nvmf/common.sh@154 -- # true 00:24:57.660 19:25:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:57.660 Cannot find device "nvmf_tgt_br2" 00:24:57.660 19:25:35 -- nvmf/common.sh@155 -- # true 00:24:57.660 19:25:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:57.660 19:25:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:57.660 Cannot find device "nvmf_tgt_br" 00:24:57.660 19:25:35 -- nvmf/common.sh@157 -- # true 00:24:57.660 19:25:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:57.660 Cannot find device "nvmf_tgt_br2" 00:24:57.660 19:25:35 -- nvmf/common.sh@158 -- # true 00:24:57.660 19:25:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:57.919 19:25:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:57.919 19:25:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:57.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:57.919 19:25:35 -- nvmf/common.sh@161 -- # true 00:24:57.919 19:25:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:57.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:24:57.919 19:25:35 -- nvmf/common.sh@162 -- # true 00:24:57.919 19:25:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:57.919 19:25:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:57.919 19:25:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:57.919 19:25:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:57.919 19:25:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:57.919 19:25:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:57.919 19:25:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:57.919 19:25:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:57.919 19:25:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:57.919 19:25:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:57.919 19:25:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:57.919 19:25:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:57.919 19:25:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:57.919 19:25:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:57.919 19:25:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:57.919 19:25:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:57.920 19:25:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:57.920 19:25:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:57.920 19:25:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:57.920 19:25:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:57.920 19:25:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:57.920 19:25:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:57.920 19:25:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:57.920 19:25:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:57.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:24:57.920 00:24:57.920 --- 10.0.0.2 ping statistics --- 00:24:57.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.920 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:57.920 19:25:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:57.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:57.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:24:57.920 00:24:57.920 --- 10.0.0.3 ping statistics --- 00:24:57.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.920 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:57.920 19:25:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:57.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:24:57.920 00:24:57.920 --- 10.0.0.1 ping statistics --- 00:24:57.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.920 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:24:57.920 19:25:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.920 19:25:35 -- nvmf/common.sh@421 -- # return 0 00:24:57.920 19:25:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:57.920 19:25:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.920 19:25:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:57.920 19:25:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:57.920 19:25:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.920 19:25:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:57.920 19:25:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:57.920 19:25:35 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:24:57.920 19:25:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:57.920 19:25:35 -- common/autotest_common.sh@10 -- # set +x 00:24:57.920 19:25:35 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:24:57.920 19:25:35 -- common/autotest_common.sh@1507 -- # bdfs=() 00:24:57.920 19:25:35 -- common/autotest_common.sh@1507 -- # local bdfs 00:24:57.920 19:25:35 -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:24:57.920 19:25:35 -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:24:57.920 19:25:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:24:57.920 19:25:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:24:57.920 19:25:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:57.920 19:25:35 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:57.920 19:25:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:24:58.179 19:25:35 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:24:58.179 19:25:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:24:58.179 19:25:35 -- common/autotest_common.sh@1510 -- # echo 0000:00:06.0 00:24:58.179 19:25:35 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:24:58.179 19:25:35 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:24:58.179 19:25:35 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:24:58.179 19:25:35 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:24:58.179 19:25:35 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:24:58.179 19:25:35 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:24:58.179 19:25:35 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:24:58.179 19:25:35 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:24:58.179 19:25:35 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:58.438 19:25:35 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:24:58.438 19:25:35 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:24:58.438 19:25:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:58.438 19:25:35 -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 19:25:35 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:24:58.438 19:25:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:58.438 19:25:35 -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 19:25:35 -- target/identify_passthru.sh@31 -- # nvmfpid=89254 00:24:58.438 19:25:35 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.438 19:25:35 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:58.438 19:25:35 -- target/identify_passthru.sh@35 -- # waitforlisten 89254 00:24:58.438 19:25:35 -- common/autotest_common.sh@817 -- # '[' -z 89254 ']' 00:24:58.438 19:25:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.438 19:25:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:58.438 19:25:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.438 19:25:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:58.438 19:25:35 -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 [2024-02-14 19:25:35.837938] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:24:58.438 [2024-02-14 19:25:35.838033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.698 [2024-02-14 19:25:35.976434] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.698 [2024-02-14 19:25:36.085963] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:58.698 [2024-02-14 19:25:36.086425] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.698 [2024-02-14 19:25:36.086610] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.698 [2024-02-14 19:25:36.086752] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
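Note (editor's illustration, not part of the test output): the nvmftestinit trace above builds the virtual topology that every TCP test in this log relies on: a dedicated network namespace for the target, veth pairs for the initiator and target sides, and a bridge joining the host-side peers, with 10.0.0.1 as the initiator address and 10.0.0.2/10.0.0.3 as the target addresses inside the namespace. A minimal standalone sketch of the same idea, run as root, with names mirroring nvmf/common.sh:

  ip netns add nvmf_tgt_ns_spdk                                   # namespace the target runs in
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up         # bridge the host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                              # initiator -> target reachability check

A second target interface (nvmf_tgt_if2, 10.0.0.3) is created the same way in the real script. The earlier "Cannot find device" / "Cannot open network namespace" messages are expected: the script first tears down any leftover topology and ignores failures when nothing is there yet.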
00:24:58.698 [2024-02-14 19:25:36.087190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.698 [2024-02-14 19:25:36.087273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.698 [2024-02-14 19:25:36.087411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.698 [2024-02-14 19:25:36.087416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.639 19:25:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:59.639 19:25:36 -- common/autotest_common.sh@850 -- # return 0 00:24:59.639 19:25:36 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:24:59.639 19:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.639 19:25:36 -- common/autotest_common.sh@10 -- # set +x 00:24:59.639 19:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.639 19:25:36 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:24:59.639 19:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.639 19:25:36 -- common/autotest_common.sh@10 -- # set +x 00:24:59.639 [2024-02-14 19:25:36.950361] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:24:59.639 19:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.639 19:25:36 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:59.639 19:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.639 19:25:36 -- common/autotest_common.sh@10 -- # set +x 00:24:59.639 [2024-02-14 19:25:36.964653] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.639 19:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.639 19:25:36 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:24:59.639 19:25:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:59.639 19:25:36 -- common/autotest_common.sh@10 -- # set +x 00:24:59.639 19:25:37 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:24:59.639 19:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.639 19:25:37 -- common/autotest_common.sh@10 -- # set +x 00:24:59.913 Nvme0n1 00:24:59.913 19:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.913 19:25:37 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:24:59.913 19:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.913 19:25:37 -- common/autotest_common.sh@10 -- # set +x 00:24:59.913 19:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.913 19:25:37 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:59.913 19:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.913 19:25:37 -- common/autotest_common.sh@10 -- # set +x 00:24:59.913 19:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.913 19:25:37 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.913 19:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.913 19:25:37 -- common/autotest_common.sh@10 -- # set +x 00:24:59.913 [2024-02-14 19:25:37.105363] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.913 19:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:24:59.913 19:25:37 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:24:59.913 19:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.913 19:25:37 -- common/autotest_common.sh@10 -- # set +x 00:24:59.913 [2024-02-14 19:25:37.113065] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:59.913 [ 00:24:59.913 { 00:24:59.913 "allow_any_host": true, 00:24:59.913 "hosts": [], 00:24:59.913 "listen_addresses": [], 00:24:59.913 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:59.913 "subtype": "Discovery" 00:24:59.913 }, 00:24:59.913 { 00:24:59.913 "allow_any_host": true, 00:24:59.913 "hosts": [], 00:24:59.913 "listen_addresses": [ 00:24:59.913 { 00:24:59.913 "adrfam": "IPv4", 00:24:59.913 "traddr": "10.0.0.2", 00:24:59.913 "transport": "TCP", 00:24:59.913 "trsvcid": "4420", 00:24:59.913 "trtype": "TCP" 00:24:59.913 } 00:24:59.913 ], 00:24:59.913 "max_cntlid": 65519, 00:24:59.913 "max_namespaces": 1, 00:24:59.913 "min_cntlid": 1, 00:24:59.913 "model_number": "SPDK bdev Controller", 00:24:59.913 "namespaces": [ 00:24:59.913 { 00:24:59.913 "bdev_name": "Nvme0n1", 00:24:59.913 "name": "Nvme0n1", 00:24:59.913 "nguid": "A487DD837B10446B8FF8B2C50D5657D2", 00:24:59.913 "nsid": 1, 00:24:59.913 "uuid": "a487dd83-7b10-446b-8ff8-b2c50d5657d2" 00:24:59.913 } 00:24:59.913 ], 00:24:59.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.913 "serial_number": "SPDK00000000000001", 00:24:59.913 "subtype": "NVMe" 00:24:59.913 } 00:24:59.913 ] 00:24:59.913 19:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.913 19:25:37 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:59.913 19:25:37 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:24:59.913 19:25:37 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:00.195 19:25:37 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:00.195 19:25:37 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:00.195 19:25:37 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:00.195 19:25:37 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:00.195 19:25:37 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:00.195 19:25:37 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:00.195 19:25:37 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:00.195 19:25:37 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.195 19:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.195 19:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:00.195 19:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.195 19:25:37 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:00.195 19:25:37 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:00.195 19:25:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:00.195 19:25:37 -- nvmf/common.sh@116 -- # sync 00:25:00.470 19:25:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:00.470 19:25:37 -- nvmf/common.sh@119 -- # set +e 00:25:00.470 19:25:37 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:00.470 19:25:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:00.470 rmmod nvme_tcp 00:25:00.470 rmmod nvme_fabrics 00:25:00.470 rmmod nvme_keyring 00:25:00.470 19:25:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:00.470 19:25:37 -- nvmf/common.sh@123 -- # set -e 00:25:00.470 19:25:37 -- nvmf/common.sh@124 -- # return 0 00:25:00.470 19:25:37 -- nvmf/common.sh@477 -- # '[' -n 89254 ']' 00:25:00.470 19:25:37 -- nvmf/common.sh@478 -- # killprocess 89254 00:25:00.470 19:25:37 -- common/autotest_common.sh@924 -- # '[' -z 89254 ']' 00:25:00.470 19:25:37 -- common/autotest_common.sh@928 -- # kill -0 89254 00:25:00.470 19:25:37 -- common/autotest_common.sh@929 -- # uname 00:25:00.470 19:25:37 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:25:00.470 19:25:37 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 89254 00:25:00.470 killing process with pid 89254 00:25:00.470 19:25:37 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:25:00.470 19:25:37 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:25:00.470 19:25:37 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 89254' 00:25:00.470 19:25:37 -- common/autotest_common.sh@943 -- # kill 89254 00:25:00.470 [2024-02-14 19:25:37.688735] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:00.470 19:25:37 -- common/autotest_common.sh@948 -- # wait 89254 00:25:00.729 19:25:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:00.729 19:25:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:00.729 19:25:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:00.729 19:25:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:00.729 19:25:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:00.729 19:25:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.729 19:25:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:00.729 19:25:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.729 19:25:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:00.729 00:25:00.729 real 0m3.097s 00:25:00.729 user 0m7.654s 00:25:00.729 sys 0m0.859s 00:25:00.729 19:25:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:00.729 ************************************ 00:25:00.729 END TEST nvmf_identify_passthru 00:25:00.729 ************************************ 00:25:00.729 19:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:00.729 19:25:38 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:00.729 19:25:38 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:25:00.729 19:25:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:00.729 19:25:38 -- common/autotest_common.sh@10 -- # set +x 00:25:00.729 ************************************ 00:25:00.729 START TEST nvmf_dif 00:25:00.729 ************************************ 00:25:00.729 19:25:38 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:00.729 * Looking for test storage... 
00:25:00.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:00.730 19:25:38 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:00.730 19:25:38 -- nvmf/common.sh@7 -- # uname -s 00:25:00.730 19:25:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.730 19:25:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.730 19:25:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.730 19:25:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.730 19:25:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.730 19:25:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.730 19:25:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.730 19:25:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.730 19:25:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.730 19:25:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.730 19:25:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:25:00.730 19:25:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:25:00.730 19:25:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.730 19:25:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.730 19:25:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:00.730 19:25:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:00.730 19:25:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.730 19:25:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.730 19:25:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.730 19:25:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.730 19:25:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.730 19:25:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.730 19:25:38 -- paths/export.sh@5 -- # export PATH 00:25:00.730 19:25:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.730 19:25:38 -- nvmf/common.sh@46 -- # : 0 00:25:00.730 19:25:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:00.730 19:25:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:00.730 19:25:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:00.730 19:25:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.730 19:25:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.730 19:25:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:00.730 19:25:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:00.730 19:25:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:00.730 19:25:38 -- target/dif.sh@15 -- # NULL_META=16 00:25:00.730 19:25:38 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:00.730 19:25:38 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:00.730 19:25:38 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:00.730 19:25:38 -- target/dif.sh@135 -- # nvmftestinit 00:25:00.730 19:25:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:00.730 19:25:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.730 19:25:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:00.730 19:25:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:00.730 19:25:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:00.730 19:25:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.730 19:25:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:00.730 19:25:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.730 19:25:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:00.730 19:25:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:00.730 19:25:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:00.730 19:25:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:00.730 19:25:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:00.730 19:25:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:00.730 19:25:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.730 19:25:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.730 19:25:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:00.730 19:25:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:00.730 19:25:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:00.730 19:25:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:00.730 19:25:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:00.730 19:25:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.730 19:25:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:00.730 19:25:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:00.730 19:25:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:00.730 19:25:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:00.730 19:25:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:00.989 19:25:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:00.989 Cannot find device "nvmf_tgt_br" 
00:25:00.989 19:25:38 -- nvmf/common.sh@154 -- # true 00:25:00.989 19:25:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:00.989 Cannot find device "nvmf_tgt_br2" 00:25:00.989 19:25:38 -- nvmf/common.sh@155 -- # true 00:25:00.989 19:25:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:00.989 19:25:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:00.989 Cannot find device "nvmf_tgt_br" 00:25:00.989 19:25:38 -- nvmf/common.sh@157 -- # true 00:25:00.989 19:25:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:00.989 Cannot find device "nvmf_tgt_br2" 00:25:00.989 19:25:38 -- nvmf/common.sh@158 -- # true 00:25:00.989 19:25:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:00.989 19:25:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:00.989 19:25:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:00.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:00.989 19:25:38 -- nvmf/common.sh@161 -- # true 00:25:00.989 19:25:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:00.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:00.989 19:25:38 -- nvmf/common.sh@162 -- # true 00:25:00.989 19:25:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:00.989 19:25:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:00.989 19:25:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:00.989 19:25:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:00.989 19:25:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:00.989 19:25:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:00.989 19:25:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:00.989 19:25:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:00.989 19:25:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:00.989 19:25:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:00.989 19:25:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:00.989 19:25:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:00.989 19:25:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:00.989 19:25:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:00.989 19:25:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:00.989 19:25:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:00.989 19:25:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:00.989 19:25:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:00.989 19:25:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:01.248 19:25:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:01.248 19:25:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:01.248 19:25:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:01.248 19:25:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:01.248 19:25:38 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:01.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:25:01.248 00:25:01.249 --- 10.0.0.2 ping statistics --- 00:25:01.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.249 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:01.249 19:25:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:01.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:01.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:25:01.249 00:25:01.249 --- 10.0.0.3 ping statistics --- 00:25:01.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.249 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:25:01.249 19:25:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:01.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:01.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:25:01.249 00:25:01.249 --- 10.0.0.1 ping statistics --- 00:25:01.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.249 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:25:01.249 19:25:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.249 19:25:38 -- nvmf/common.sh@421 -- # return 0 00:25:01.249 19:25:38 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:01.249 19:25:38 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:01.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:01.508 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:01.508 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:01.508 19:25:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.508 19:25:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:01.508 19:25:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:01.508 19:25:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.508 19:25:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:01.508 19:25:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:01.508 19:25:38 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:01.508 19:25:38 -- target/dif.sh@137 -- # nvmfappstart 00:25:01.508 19:25:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:01.508 19:25:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:01.508 19:25:38 -- common/autotest_common.sh@10 -- # set +x 00:25:01.508 19:25:38 -- nvmf/common.sh@469 -- # nvmfpid=89607 00:25:01.508 19:25:38 -- nvmf/common.sh@470 -- # waitforlisten 89607 00:25:01.508 19:25:38 -- common/autotest_common.sh@817 -- # '[' -z 89607 ']' 00:25:01.508 19:25:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.508 19:25:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:01.508 19:25:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:01.508 19:25:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
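Note (editor's illustration, not part of the test output): for reference, the identify_passthru run above exercises the passthru-identify path end to end: the target starts with --wait-for-rpc inside the namespace, is told to answer Identify Controller with the backing PCIe controller's data, and spdk_nvme_identify over TCP must then report the same serial and model number (12340 / QEMU here) that were read locally over PCIe. A condensed sketch of the RPC sequence the test drives through its rpc_cmd wrapper (scripts/rpc.py is shown only for illustration; the BDF 0000:00:06.0 is the QEMU NVMe device seen in this run):

  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # from the initiator side, compare against the values read over PCIe:
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'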
00:25:01.508 19:25:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:01.508 19:25:38 -- common/autotest_common.sh@10 -- # set +x 00:25:01.508 [2024-02-14 19:25:38.923798] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:25:01.508 [2024-02-14 19:25:38.923891] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.768 [2024-02-14 19:25:39.062843] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.768 [2024-02-14 19:25:39.160615] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:01.768 [2024-02-14 19:25:39.160825] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.768 [2024-02-14 19:25:39.160844] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.768 [2024-02-14 19:25:39.160855] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.768 [2024-02-14 19:25:39.160897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.705 19:25:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:02.705 19:25:39 -- common/autotest_common.sh@850 -- # return 0 00:25:02.705 19:25:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:02.705 19:25:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:02.705 19:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:02.705 19:25:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.705 19:25:39 -- target/dif.sh@139 -- # create_transport 00:25:02.705 19:25:39 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:02.705 19:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.705 19:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:02.705 [2024-02-14 19:25:39.866079] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.705 19:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.705 19:25:39 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:02.705 19:25:39 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:25:02.705 19:25:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:02.705 19:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:02.705 ************************************ 00:25:02.705 START TEST fio_dif_1_default 00:25:02.705 ************************************ 00:25:02.705 19:25:39 -- common/autotest_common.sh@1102 -- # fio_dif_1 00:25:02.705 19:25:39 -- target/dif.sh@86 -- # create_subsystems 0 00:25:02.705 19:25:39 -- target/dif.sh@28 -- # local sub 00:25:02.705 19:25:39 -- target/dif.sh@30 -- # for sub in "$@" 00:25:02.705 19:25:39 -- target/dif.sh@31 -- # create_subsystem 0 00:25:02.705 19:25:39 -- target/dif.sh@18 -- # local sub_id=0 00:25:02.705 19:25:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:02.705 19:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.705 19:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:02.705 bdev_null0 00:25:02.705 19:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.705 19:25:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:02.705 19:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.705 19:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:02.705 19:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.705 19:25:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:02.705 19:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.705 19:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:02.705 19:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.705 19:25:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:02.705 19:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.705 19:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:02.705 [2024-02-14 19:25:39.910184] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.705 19:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.705 19:25:39 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:02.705 19:25:39 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:02.705 19:25:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:02.705 19:25:39 -- nvmf/common.sh@520 -- # config=() 00:25:02.705 19:25:39 -- nvmf/common.sh@520 -- # local subsystem config 00:25:02.705 19:25:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:02.705 19:25:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:02.705 19:25:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:02.705 { 00:25:02.705 "params": { 00:25:02.705 "name": "Nvme$subsystem", 00:25:02.705 "trtype": "$TEST_TRANSPORT", 00:25:02.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.705 "adrfam": "ipv4", 00:25:02.705 "trsvcid": "$NVMF_PORT", 00:25:02.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.705 "hdgst": ${hdgst:-false}, 00:25:02.705 "ddgst": ${ddgst:-false} 00:25:02.705 }, 00:25:02.705 "method": "bdev_nvme_attach_controller" 00:25:02.705 } 00:25:02.705 EOF 00:25:02.705 )") 00:25:02.705 19:25:39 -- target/dif.sh@82 -- # gen_fio_conf 00:25:02.706 19:25:39 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:02.706 19:25:39 -- target/dif.sh@54 -- # local file 00:25:02.706 19:25:39 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:25:02.706 19:25:39 -- target/dif.sh@56 -- # cat 00:25:02.706 19:25:39 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:02.706 19:25:39 -- common/autotest_common.sh@1316 -- # local sanitizers 00:25:02.706 19:25:39 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:02.706 19:25:39 -- common/autotest_common.sh@1318 -- # shift 00:25:02.706 19:25:39 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:25:02.706 19:25:39 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:02.706 19:25:39 -- nvmf/common.sh@542 -- # cat 00:25:02.706 19:25:39 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:02.706 19:25:39 -- common/autotest_common.sh@1322 -- # grep libasan 00:25:02.706 19:25:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:02.706 
19:25:39 -- target/dif.sh@72 -- # (( file <= files )) 00:25:02.706 19:25:39 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:02.706 19:25:39 -- nvmf/common.sh@544 -- # jq . 00:25:02.706 19:25:39 -- nvmf/common.sh@545 -- # IFS=, 00:25:02.706 19:25:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:02.706 "params": { 00:25:02.706 "name": "Nvme0", 00:25:02.706 "trtype": "tcp", 00:25:02.706 "traddr": "10.0.0.2", 00:25:02.706 "adrfam": "ipv4", 00:25:02.706 "trsvcid": "4420", 00:25:02.706 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:02.706 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:02.706 "hdgst": false, 00:25:02.706 "ddgst": false 00:25:02.706 }, 00:25:02.706 "method": "bdev_nvme_attach_controller" 00:25:02.706 }' 00:25:02.706 19:25:39 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:02.706 19:25:39 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:02.706 19:25:39 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:02.706 19:25:39 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:02.706 19:25:39 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:25:02.706 19:25:39 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:02.706 19:25:39 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:02.706 19:25:39 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:02.706 19:25:39 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:02.706 19:25:39 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:02.965 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:02.965 fio-3.35 00:25:02.965 Starting 1 thread 00:25:03.223 [2024-02-14 19:25:40.590302] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:03.223 [2024-02-14 19:25:40.591104] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:15.429 00:25:15.429 filename0: (groupid=0, jobs=1): err= 0: pid=89692: Wed Feb 14 19:25:50 2024 00:25:15.429 read: IOPS=3627, BW=14.2MiB/s (14.9MB/s)(142MiB/10025msec) 00:25:15.429 slat (nsec): min=5709, max=53954, avg=7094.45, stdev=2596.38 00:25:15.429 clat (usec): min=353, max=43254, avg=1080.76, stdev=5174.76 00:25:15.429 lat (usec): min=359, max=43263, avg=1087.86, stdev=5174.83 00:25:15.429 clat percentiles (usec): 00:25:15.429 | 1.00th=[ 379], 5.00th=[ 379], 10.00th=[ 383], 20.00th=[ 392], 00:25:15.429 | 30.00th=[ 396], 40.00th=[ 400], 50.00th=[ 404], 60.00th=[ 408], 00:25:15.429 | 70.00th=[ 416], 80.00th=[ 424], 90.00th=[ 445], 95.00th=[ 478], 00:25:15.429 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:25:15.429 | 99.99th=[43254] 00:25:15.429 bw ( KiB/s): min= 3776, max=27168, per=100.00%, avg=14545.60, stdev=6916.79, samples=20 00:25:15.429 iops : min= 944, max= 6792, avg=3636.40, stdev=1729.20, samples=20 00:25:15.429 lat (usec) : 500=96.01%, 750=2.33% 00:25:15.429 lat (msec) : 4=0.01%, 50=1.65% 00:25:15.429 cpu : usr=88.82%, sys=9.35%, ctx=30, majf=0, minf=9 00:25:15.429 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:15.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.429 issued rwts: total=36368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.429 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:15.429 00:25:15.429 Run status group 0 (all jobs): 00:25:15.429 READ: bw=14.2MiB/s (14.9MB/s), 14.2MiB/s-14.2MiB/s (14.9MB/s-14.9MB/s), io=142MiB (149MB), run=10025-10025msec 00:25:15.429 19:25:50 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:15.429 19:25:50 -- target/dif.sh@43 -- # local sub 00:25:15.430 19:25:50 -- target/dif.sh@45 -- # for sub in "$@" 00:25:15.430 19:25:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:15.430 19:25:50 -- target/dif.sh@36 -- # local sub_id=0 00:25:15.430 19:25:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:15.430 19:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.430 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 19:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.430 19:25:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:15.430 19:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.430 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 19:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.430 00:25:15.430 real 0m11.097s 00:25:15.430 user 0m9.561s 00:25:15.430 sys 0m1.254s 00:25:15.430 ************************************ 00:25:15.430 END TEST fio_dif_1_default 00:25:15.430 ************************************ 00:25:15.430 19:25:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:15.430 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 19:25:51 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:15.430 19:25:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:25:15.430 19:25:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:15.430 19:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 ************************************ 00:25:15.430 START TEST 
fio_dif_1_multi_subsystems 00:25:15.430 ************************************ 00:25:15.430 19:25:51 -- common/autotest_common.sh@1102 -- # fio_dif_1_multi_subsystems 00:25:15.430 19:25:51 -- target/dif.sh@92 -- # local files=1 00:25:15.430 19:25:51 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:15.430 19:25:51 -- target/dif.sh@28 -- # local sub 00:25:15.430 19:25:51 -- target/dif.sh@30 -- # for sub in "$@" 00:25:15.430 19:25:51 -- target/dif.sh@31 -- # create_subsystem 0 00:25:15.430 19:25:51 -- target/dif.sh@18 -- # local sub_id=0 00:25:15.430 19:25:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:15.430 19:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.430 19:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 bdev_null0 00:25:15.430 19:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.430 19:25:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:15.430 19:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.430 19:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 19:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.430 19:25:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:15.430 19:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.430 19:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 19:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.430 19:25:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:15.430 19:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.430 19:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 [2024-02-14 19:25:51.062664] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.430 19:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.430 19:25:51 -- target/dif.sh@30 -- # for sub in "$@" 00:25:15.430 19:25:51 -- target/dif.sh@31 -- # create_subsystem 1 00:25:15.430 19:25:51 -- target/dif.sh@18 -- # local sub_id=1 00:25:15.430 19:25:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:15.430 19:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.430 19:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 bdev_null1 00:25:15.430 19:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.430 19:25:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:15.430 19:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.430 19:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 19:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.430 19:25:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:15.430 19:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.430 19:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:15.430 19:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.430 19:25:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.430 19:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.430 19:25:51 -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.430 19:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.430 19:25:51 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:15.430 19:25:51 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:15.430 19:25:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:15.430 19:25:51 -- nvmf/common.sh@520 -- # config=() 00:25:15.430 19:25:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.430 19:25:51 -- nvmf/common.sh@520 -- # local subsystem config 00:25:15.430 19:25:51 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.430 19:25:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.430 19:25:51 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:25:15.430 19:25:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.430 { 00:25:15.430 "params": { 00:25:15.430 "name": "Nvme$subsystem", 00:25:15.430 "trtype": "$TEST_TRANSPORT", 00:25:15.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.430 "adrfam": "ipv4", 00:25:15.430 "trsvcid": "$NVMF_PORT", 00:25:15.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.430 "hdgst": ${hdgst:-false}, 00:25:15.430 "ddgst": ${ddgst:-false} 00:25:15.430 }, 00:25:15.430 "method": "bdev_nvme_attach_controller" 00:25:15.430 } 00:25:15.430 EOF 00:25:15.430 )") 00:25:15.430 19:25:51 -- target/dif.sh@82 -- # gen_fio_conf 00:25:15.430 19:25:51 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:15.430 19:25:51 -- target/dif.sh@54 -- # local file 00:25:15.430 19:25:51 -- common/autotest_common.sh@1316 -- # local sanitizers 00:25:15.430 19:25:51 -- target/dif.sh@56 -- # cat 00:25:15.430 19:25:51 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.430 19:25:51 -- common/autotest_common.sh@1318 -- # shift 00:25:15.430 19:25:51 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:25:15.430 19:25:51 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.430 19:25:51 -- nvmf/common.sh@542 -- # cat 00:25:15.430 19:25:51 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.430 19:25:51 -- common/autotest_common.sh@1322 -- # grep libasan 00:25:15.430 19:25:51 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:15.430 19:25:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:15.430 19:25:51 -- target/dif.sh@72 -- # (( file <= files )) 00:25:15.430 19:25:51 -- target/dif.sh@73 -- # cat 00:25:15.430 19:25:51 -- target/dif.sh@72 -- # (( file++ )) 00:25:15.430 19:25:51 -- target/dif.sh@72 -- # (( file <= files )) 00:25:15.430 19:25:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.430 19:25:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.430 { 00:25:15.430 "params": { 00:25:15.430 "name": "Nvme$subsystem", 00:25:15.430 "trtype": "$TEST_TRANSPORT", 00:25:15.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.430 "adrfam": "ipv4", 00:25:15.430 "trsvcid": "$NVMF_PORT", 00:25:15.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.430 "hdgst": ${hdgst:-false}, 00:25:15.430 "ddgst": ${ddgst:-false} 00:25:15.430 }, 00:25:15.430 "method": "bdev_nvme_attach_controller" 00:25:15.430 } 
00:25:15.430 EOF 00:25:15.430 )") 00:25:15.430 19:25:51 -- nvmf/common.sh@542 -- # cat 00:25:15.430 19:25:51 -- nvmf/common.sh@544 -- # jq . 00:25:15.430 19:25:51 -- nvmf/common.sh@545 -- # IFS=, 00:25:15.430 19:25:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:15.430 "params": { 00:25:15.430 "name": "Nvme0", 00:25:15.430 "trtype": "tcp", 00:25:15.430 "traddr": "10.0.0.2", 00:25:15.430 "adrfam": "ipv4", 00:25:15.430 "trsvcid": "4420", 00:25:15.430 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:15.430 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:15.430 "hdgst": false, 00:25:15.430 "ddgst": false 00:25:15.430 }, 00:25:15.430 "method": "bdev_nvme_attach_controller" 00:25:15.430 },{ 00:25:15.430 "params": { 00:25:15.430 "name": "Nvme1", 00:25:15.430 "trtype": "tcp", 00:25:15.430 "traddr": "10.0.0.2", 00:25:15.430 "adrfam": "ipv4", 00:25:15.430 "trsvcid": "4420", 00:25:15.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:15.430 "hdgst": false, 00:25:15.430 "ddgst": false 00:25:15.430 }, 00:25:15.430 "method": "bdev_nvme_attach_controller" 00:25:15.430 }' 00:25:15.430 19:25:51 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:15.430 19:25:51 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:15.430 19:25:51 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.430 19:25:51 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:15.430 19:25:51 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:25:15.430 19:25:51 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:15.430 19:25:51 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:15.430 19:25:51 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:15.430 19:25:51 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:15.431 19:25:51 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.431 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:15.431 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:15.431 fio-3.35 00:25:15.431 Starting 2 threads 00:25:15.431 [2024-02-14 19:25:51.882604] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
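The "RPC Unix domain socket /var/tmp/spdk.sock in use" / "Unable to start RPC service" pair printed around this point (and again before every later fio run in this log) is most likely the fio plugin's embedded SPDK environment trying to start its own RPC server on the default socket that the nvmf target already owns; the jobs still run to completion with err= 0 below, so it does not affect the test. The two spdk_bdev filenames themselves come from the JSON handed to fio on /dev/fd/62 above: each bdev_nvme_attach_controller block tells the plugin to connect over NVMe/TCP to one of the subsystems listening on 10.0.0.2:4420 and expose its namespace as a bdev. A rough standalone equivalent of this step is sketched below; the config.json/jobs.fio file names and the Nvme0n1/Nvme1n1 namespace-bdev names are illustrative assumptions, not values taken from this log.

    # minimal sketch of what the harness does via /dev/fd/62 (JSON config) and /dev/fd/61 (job file)
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf config.json jobs.fio
    #
    # jobs.fio - assumed shape of what gen_fio_conf writes; rw/bs/iodepth match the
    # "filename0:"/"filename1:" header lines above, the filename= values are assumed:
    #   [filename0]
    #   rw=randread
    #   bs=4096
    #   iodepth=4
    #   filename=Nvme0n1
    #   [filename1]
    #   rw=randread
    #   bs=4096
    #   iodepth=4
    #   filename=Nvme1n1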
00:25:15.431 [2024-02-14 19:25:51.882675] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:25.402 00:25:25.402 filename0: (groupid=0, jobs=1): err= 0: pid=89852: Wed Feb 14 19:26:02 2024 00:25:25.402 read: IOPS=199, BW=800KiB/s (819kB/s)(8016KiB/10024msec) 00:25:25.402 slat (nsec): min=6010, max=47096, avg=9682.39, stdev=5331.03 00:25:25.402 clat (usec): min=358, max=41919, avg=19977.30, stdev=20219.20 00:25:25.402 lat (usec): min=365, max=41931, avg=19986.98, stdev=20219.01 00:25:25.402 clat percentiles (usec): 00:25:25.402 | 1.00th=[ 371], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 400], 00:25:25.402 | 30.00th=[ 420], 40.00th=[ 453], 50.00th=[ 717], 60.00th=[40633], 00:25:25.402 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:25.402 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:25:25.402 | 99.99th=[41681] 00:25:25.402 bw ( KiB/s): min= 512, max= 1120, per=46.96%, avg=799.65, stdev=147.89, samples=20 00:25:25.402 iops : min= 128, max= 280, avg=199.85, stdev=36.97, samples=20 00:25:25.402 lat (usec) : 500=46.56%, 750=3.99%, 1000=0.95% 00:25:25.402 lat (msec) : 2=0.20%, 50=48.30% 00:25:25.402 cpu : usr=97.08%, sys=2.17%, ctx=114, majf=0, minf=0 00:25:25.402 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:25.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.402 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.402 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:25.402 filename1: (groupid=0, jobs=1): err= 0: pid=89853: Wed Feb 14 19:26:02 2024 00:25:25.402 read: IOPS=225, BW=903KiB/s (925kB/s)(9040KiB/10012msec) 00:25:25.402 slat (nsec): min=5833, max=47676, avg=9246.06, stdev=5440.92 00:25:25.402 clat (usec): min=351, max=41976, avg=17691.76, stdev=20037.07 00:25:25.402 lat (usec): min=358, max=41987, avg=17701.01, stdev=20036.98 00:25:25.402 clat percentiles (usec): 00:25:25.402 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 388], 00:25:25.402 | 30.00th=[ 396], 40.00th=[ 412], 50.00th=[ 441], 60.00th=[40633], 00:25:25.402 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:25.402 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:25:25.402 | 99.99th=[42206] 00:25:25.402 bw ( KiB/s): min= 576, max= 1341, per=53.48%, avg=910.74, stdev=220.96, samples=19 00:25:25.402 iops : min= 144, max= 335, avg=227.63, stdev=55.25, samples=19 00:25:25.402 lat (usec) : 500=54.51%, 750=1.77%, 1000=0.88% 00:25:25.402 lat (msec) : 2=0.18%, 50=42.65% 00:25:25.402 cpu : usr=97.21%, sys=2.42%, ctx=9, majf=0, minf=0 00:25:25.402 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:25.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.402 issued rwts: total=2260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.402 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:25.402 00:25:25.402 Run status group 0 (all jobs): 00:25:25.402 READ: bw=1702KiB/s (1742kB/s), 800KiB/s-903KiB/s (819kB/s-925kB/s), io=16.7MiB (17.5MB), run=10012-10024msec 00:25:25.402 19:26:02 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:25.402 19:26:02 -- target/dif.sh@43 -- # local sub 00:25:25.402 19:26:02 -- target/dif.sh@45 -- # for sub in "$@" 
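The destroy_subsystems loop starting here simply reverses the setup: for each index it removes the NVMe-oF subsystem and then deletes the backing null bdev, as the rpc_cmd trace below shows. Assuming rpc_cmd is the usual wrapper around scripts/rpc.py pointed at the running target, the standalone equivalent is roughly:

    # teardown sketch; subsystem NQNs and bdev names copied from the trace
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_null_delete bdev_null1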
00:25:25.402 19:26:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:25.402 19:26:02 -- target/dif.sh@36 -- # local sub_id=0 00:25:25.402 19:26:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:25.402 19:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.402 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:25.402 19:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.402 19:26:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:25.402 19:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.402 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:25.402 19:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.402 19:26:02 -- target/dif.sh@45 -- # for sub in "$@" 00:25:25.402 19:26:02 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:25.402 19:26:02 -- target/dif.sh@36 -- # local sub_id=1 00:25:25.402 19:26:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.402 19:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.402 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:25.402 19:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.402 19:26:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:25.402 19:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.403 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:25.403 19:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.403 00:25:25.403 real 0m11.316s 00:25:25.403 user 0m20.365s 00:25:25.403 sys 0m0.776s 00:25:25.403 ************************************ 00:25:25.403 END TEST fio_dif_1_multi_subsystems 00:25:25.403 ************************************ 00:25:25.403 19:26:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:25.403 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:25.403 19:26:02 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:25.403 19:26:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:25:25.403 19:26:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:25.403 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:25.403 ************************************ 00:25:25.403 START TEST fio_dif_rand_params 00:25:25.403 ************************************ 00:25:25.403 19:26:02 -- common/autotest_common.sh@1102 -- # fio_dif_rand_params 00:25:25.403 19:26:02 -- target/dif.sh@100 -- # local NULL_DIF 00:25:25.403 19:26:02 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:25.403 19:26:02 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:25.403 19:26:02 -- target/dif.sh@103 -- # bs=128k 00:25:25.403 19:26:02 -- target/dif.sh@103 -- # numjobs=3 00:25:25.403 19:26:02 -- target/dif.sh@103 -- # iodepth=3 00:25:25.403 19:26:02 -- target/dif.sh@103 -- # runtime=5 00:25:25.403 19:26:02 -- target/dif.sh@105 -- # create_subsystems 0 00:25:25.403 19:26:02 -- target/dif.sh@28 -- # local sub 00:25:25.403 19:26:02 -- target/dif.sh@30 -- # for sub in "$@" 00:25:25.403 19:26:02 -- target/dif.sh@31 -- # create_subsystem 0 00:25:25.403 19:26:02 -- target/dif.sh@18 -- # local sub_id=0 00:25:25.403 19:26:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:25.403 19:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.403 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:25.403 bdev_null0 00:25:25.403 19:26:02 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.403 19:26:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:25.403 19:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.403 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:25.403 19:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.403 19:26:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:25.403 19:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.403 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:25.403 19:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.403 19:26:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:25.403 19:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.403 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:25.403 [2024-02-14 19:26:02.433983] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.403 19:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.403 19:26:02 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:25.403 19:26:02 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:25.403 19:26:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:25.403 19:26:02 -- nvmf/common.sh@520 -- # config=() 00:25:25.403 19:26:02 -- nvmf/common.sh@520 -- # local subsystem config 00:25:25.403 19:26:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:25.403 19:26:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:25.403 19:26:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:25.403 { 00:25:25.403 "params": { 00:25:25.403 "name": "Nvme$subsystem", 00:25:25.403 "trtype": "$TEST_TRANSPORT", 00:25:25.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.403 "adrfam": "ipv4", 00:25:25.403 "trsvcid": "$NVMF_PORT", 00:25:25.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.403 "hdgst": ${hdgst:-false}, 00:25:25.403 "ddgst": ${ddgst:-false} 00:25:25.403 }, 00:25:25.403 "method": "bdev_nvme_attach_controller" 00:25:25.403 } 00:25:25.403 EOF 00:25:25.403 )") 00:25:25.403 19:26:02 -- target/dif.sh@82 -- # gen_fio_conf 00:25:25.403 19:26:02 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:25.403 19:26:02 -- target/dif.sh@54 -- # local file 00:25:25.403 19:26:02 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:25:25.403 19:26:02 -- target/dif.sh@56 -- # cat 00:25:25.403 19:26:02 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:25.403 19:26:02 -- common/autotest_common.sh@1316 -- # local sanitizers 00:25:25.403 19:26:02 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:25.403 19:26:02 -- common/autotest_common.sh@1318 -- # shift 00:25:25.403 19:26:02 -- nvmf/common.sh@542 -- # cat 00:25:25.403 19:26:02 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:25:25.403 19:26:02 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:25.403 19:26:02 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:25.403 19:26:02 
-- target/dif.sh@72 -- # (( file = 1 )) 00:25:25.403 19:26:02 -- common/autotest_common.sh@1322 -- # grep libasan 00:25:25.403 19:26:02 -- target/dif.sh@72 -- # (( file <= files )) 00:25:25.403 19:26:02 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:25.403 19:26:02 -- nvmf/common.sh@544 -- # jq . 00:25:25.403 19:26:02 -- nvmf/common.sh@545 -- # IFS=, 00:25:25.403 19:26:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:25.403 "params": { 00:25:25.403 "name": "Nvme0", 00:25:25.403 "trtype": "tcp", 00:25:25.403 "traddr": "10.0.0.2", 00:25:25.403 "adrfam": "ipv4", 00:25:25.403 "trsvcid": "4420", 00:25:25.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:25.403 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:25.403 "hdgst": false, 00:25:25.403 "ddgst": false 00:25:25.403 }, 00:25:25.403 "method": "bdev_nvme_attach_controller" 00:25:25.403 }' 00:25:25.403 19:26:02 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:25.403 19:26:02 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:25.403 19:26:02 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:25.403 19:26:02 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:25.403 19:26:02 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:25.403 19:26:02 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:25:25.403 19:26:02 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:25.403 19:26:02 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:25.403 19:26:02 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:25.403 19:26:02 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:25.403 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:25.403 ... 00:25:25.403 fio-3.35 00:25:25.403 Starting 3 threads 00:25:25.970 [2024-02-14 19:26:03.099023] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
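Before this fio_dif_rand_params run, the trace above created a single null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, and exported it through subsystem cnode0 on the same TCP listener. Assuming rpc_cmd maps to scripts/rpc.py against the running target, that setup reduces to the four calls below (arguments copied from the trace):

    # setup sketch for the DIF-type-3 target used by fio_dif_rand_params
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420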
00:25:25.970 [2024-02-14 19:26:03.099117] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:31.240 00:25:31.240 filename0: (groupid=0, jobs=1): err= 0: pid=90013: Wed Feb 14 19:26:08 2024 00:25:31.240 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(162MiB/5003msec) 00:25:31.240 slat (nsec): min=6070, max=54818, avg=12702.38, stdev=5537.63 00:25:31.240 clat (usec): min=2858, max=52682, avg=11553.49, stdev=11094.00 00:25:31.240 lat (usec): min=2870, max=52713, avg=11566.20, stdev=11094.19 00:25:31.240 clat percentiles (usec): 00:25:31.240 | 1.00th=[ 4228], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6325], 00:25:31.240 | 30.00th=[ 6652], 40.00th=[ 7111], 50.00th=[ 8848], 60.00th=[ 9896], 00:25:31.240 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11863], 95.00th=[47973], 00:25:31.240 | 99.00th=[51119], 99.50th=[52167], 99.90th=[52167], 99.95th=[52691], 00:25:31.240 | 99.99th=[52691] 00:25:31.240 bw ( KiB/s): min=21504, max=45568, per=29.75%, avg=32995.56, stdev=8270.07, samples=9 00:25:31.240 iops : min= 168, max= 356, avg=257.78, stdev=64.61, samples=9 00:25:31.241 lat (msec) : 4=0.85%, 10=59.83%, 20=31.46%, 50=5.09%, 100=2.78% 00:25:31.241 cpu : usr=94.54%, sys=3.88%, ctx=8, majf=0, minf=0 00:25:31.241 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.241 issued rwts: total=1297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.241 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:31.241 filename0: (groupid=0, jobs=1): err= 0: pid=90014: Wed Feb 14 19:26:08 2024 00:25:31.241 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(146MiB/5004msec) 00:25:31.241 slat (nsec): min=5803, max=54013, avg=14343.39, stdev=7466.39 00:25:31.241 clat (usec): min=3033, max=51503, avg=12839.77, stdev=13472.82 00:25:31.241 lat (usec): min=3042, max=51522, avg=12854.11, stdev=13472.87 00:25:31.241 clat percentiles (usec): 00:25:31.241 | 1.00th=[ 3392], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 6652], 00:25:31.241 | 30.00th=[ 7242], 40.00th=[ 7898], 50.00th=[ 8356], 60.00th=[ 8717], 00:25:31.241 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[46924], 95.00th=[49021], 00:25:31.241 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:25:31.241 | 99.99th=[51643] 00:25:31.241 bw ( KiB/s): min=19968, max=35072, per=25.85%, avg=28672.00, stdev=5735.77, samples=9 00:25:31.241 iops : min= 156, max= 274, avg=224.00, stdev=44.81, samples=9 00:25:31.241 lat (msec) : 4=2.57%, 10=83.63%, 20=1.46%, 50=10.88%, 100=1.46% 00:25:31.241 cpu : usr=95.94%, sys=3.00%, ctx=5, majf=0, minf=0 00:25:31.241 IO depths : 1=5.2%, 2=94.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.241 issued rwts: total=1167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.241 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:31.241 filename0: (groupid=0, jobs=1): err= 0: pid=90015: Wed Feb 14 19:26:08 2024 00:25:31.241 read: IOPS=374, BW=46.8MiB/s (49.1MB/s)(234MiB/5002msec) 00:25:31.241 slat (nsec): min=5830, max=67360, avg=9637.52, stdev=5595.38 00:25:31.241 clat (usec): min=3313, max=49899, avg=7992.22, stdev=4070.82 00:25:31.241 lat (usec): min=3320, max=49905, avg=8001.86, stdev=4071.48 00:25:31.241 clat 
percentiles (usec): 00:25:31.241 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3654], 00:25:31.241 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7701], 60.00th=[ 8225], 00:25:31.241 | 70.00th=[10290], 80.00th=[11338], 90.00th=[11863], 95.00th=[12256], 00:25:31.241 | 99.00th=[13042], 99.50th=[13960], 99.90th=[49546], 99.95th=[50070], 00:25:31.241 | 99.99th=[50070] 00:25:31.241 bw ( KiB/s): min=43776, max=58368, per=44.16%, avg=48981.33, stdev=4914.27, samples=9 00:25:31.241 iops : min= 342, max= 456, avg=382.67, stdev=38.39, samples=9 00:25:31.241 lat (msec) : 4=22.65%, 10=46.69%, 20=30.18%, 50=0.48% 00:25:31.241 cpu : usr=92.86%, sys=5.20%, ctx=5, majf=0, minf=9 00:25:31.241 IO depths : 1=31.6%, 2=68.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.241 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.241 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:31.241 00:25:31.241 Run status group 0 (all jobs): 00:25:31.241 READ: bw=108MiB/s (114MB/s), 29.2MiB/s-46.8MiB/s (30.6MB/s-49.1MB/s), io=542MiB (568MB), run=5002-5004msec 00:25:31.241 19:26:08 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:31.241 19:26:08 -- target/dif.sh@43 -- # local sub 00:25:31.241 19:26:08 -- target/dif.sh@45 -- # for sub in "$@" 00:25:31.241 19:26:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:31.241 19:26:08 -- target/dif.sh@36 -- # local sub_id=0 00:25:31.241 19:26:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:31.241 19:26:08 -- target/dif.sh@109 -- # bs=4k 00:25:31.241 19:26:08 -- target/dif.sh@109 -- # numjobs=8 00:25:31.241 19:26:08 -- target/dif.sh@109 -- # iodepth=16 00:25:31.241 19:26:08 -- target/dif.sh@109 -- # runtime= 00:25:31.241 19:26:08 -- target/dif.sh@109 -- # files=2 00:25:31.241 19:26:08 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:31.241 19:26:08 -- target/dif.sh@28 -- # local sub 00:25:31.241 19:26:08 -- target/dif.sh@30 -- # for sub in "$@" 00:25:31.241 19:26:08 -- target/dif.sh@31 -- # create_subsystem 0 00:25:31.241 19:26:08 -- target/dif.sh@18 -- # local sub_id=0 00:25:31.241 19:26:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 bdev_null0 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 19:26:08 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 [2024-02-14 19:26:08.559613] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@30 -- # for sub in "$@" 00:25:31.241 19:26:08 -- target/dif.sh@31 -- # create_subsystem 1 00:25:31.241 19:26:08 -- target/dif.sh@18 -- # local sub_id=1 00:25:31.241 19:26:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 bdev_null1 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@30 -- # for sub in "$@" 00:25:31.241 19:26:08 -- target/dif.sh@31 -- # create_subsystem 2 00:25:31.241 19:26:08 -- target/dif.sh@18 -- # local sub_id=2 00:25:31.241 19:26:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 bdev_null2 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:31.241 19:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.241 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 19:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.241 19:26:08 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:31.241 19:26:08 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:31.241 19:26:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:31.241 19:26:08 -- nvmf/common.sh@520 -- # config=() 00:25:31.241 19:26:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.241 19:26:08 -- nvmf/common.sh@520 -- # local subsystem config 00:25:31.241 19:26:08 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.241 19:26:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.241 19:26:08 -- target/dif.sh@82 -- # gen_fio_conf 00:25:31.241 19:26:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.241 { 00:25:31.241 "params": { 00:25:31.241 "name": "Nvme$subsystem", 00:25:31.241 "trtype": "$TEST_TRANSPORT", 00:25:31.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.241 "adrfam": "ipv4", 00:25:31.241 "trsvcid": "$NVMF_PORT", 00:25:31.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.242 "hdgst": ${hdgst:-false}, 00:25:31.242 "ddgst": ${ddgst:-false} 00:25:31.242 }, 00:25:31.242 "method": "bdev_nvme_attach_controller" 00:25:31.242 } 00:25:31.242 EOF 00:25:31.242 )") 00:25:31.242 19:26:08 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:25:31.242 19:26:08 -- target/dif.sh@54 -- # local file 00:25:31.242 19:26:08 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:31.242 19:26:08 -- target/dif.sh@56 -- # cat 00:25:31.242 19:26:08 -- common/autotest_common.sh@1316 -- # local sanitizers 00:25:31.242 19:26:08 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:31.242 19:26:08 -- common/autotest_common.sh@1318 -- # shift 00:25:31.242 19:26:08 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:25:31.242 19:26:08 -- nvmf/common.sh@542 -- # cat 00:25:31.242 19:26:08 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.242 19:26:08 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:31.242 19:26:08 -- common/autotest_common.sh@1322 -- # grep libasan 00:25:31.242 19:26:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:31.242 19:26:08 -- target/dif.sh@72 -- # (( file <= files )) 00:25:31.242 19:26:08 -- target/dif.sh@73 -- # cat 00:25:31.242 19:26:08 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:31.242 19:26:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.242 19:26:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.242 { 00:25:31.242 "params": { 00:25:31.242 "name": "Nvme$subsystem", 00:25:31.242 "trtype": "$TEST_TRANSPORT", 00:25:31.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.242 "adrfam": "ipv4", 00:25:31.242 "trsvcid": "$NVMF_PORT", 00:25:31.242 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.242 "hdgst": ${hdgst:-false}, 00:25:31.242 "ddgst": ${ddgst:-false} 00:25:31.242 }, 00:25:31.242 "method": "bdev_nvme_attach_controller" 00:25:31.242 } 00:25:31.242 EOF 00:25:31.242 )") 00:25:31.242 19:26:08 -- target/dif.sh@72 -- # (( file++ )) 00:25:31.242 19:26:08 -- target/dif.sh@72 -- # (( file <= files )) 00:25:31.242 19:26:08 -- target/dif.sh@73 -- # cat 00:25:31.242 19:26:08 -- nvmf/common.sh@542 -- # cat 00:25:31.242 19:26:08 -- target/dif.sh@72 -- # (( file++ )) 00:25:31.242 19:26:08 -- target/dif.sh@72 -- # (( file <= files )) 00:25:31.242 19:26:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.242 19:26:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.242 { 00:25:31.242 "params": { 00:25:31.242 "name": "Nvme$subsystem", 00:25:31.242 "trtype": "$TEST_TRANSPORT", 00:25:31.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.242 "adrfam": "ipv4", 00:25:31.242 "trsvcid": "$NVMF_PORT", 00:25:31.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.242 "hdgst": ${hdgst:-false}, 00:25:31.242 "ddgst": ${ddgst:-false} 00:25:31.242 }, 00:25:31.242 "method": "bdev_nvme_attach_controller" 00:25:31.242 } 00:25:31.242 EOF 00:25:31.242 )") 00:25:31.242 19:26:08 -- nvmf/common.sh@542 -- # cat 00:25:31.242 19:26:08 -- nvmf/common.sh@544 -- # jq . 00:25:31.242 19:26:08 -- nvmf/common.sh@545 -- # IFS=, 00:25:31.242 19:26:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:31.242 "params": { 00:25:31.242 "name": "Nvme0", 00:25:31.242 "trtype": "tcp", 00:25:31.242 "traddr": "10.0.0.2", 00:25:31.242 "adrfam": "ipv4", 00:25:31.242 "trsvcid": "4420", 00:25:31.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:31.242 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:31.242 "hdgst": false, 00:25:31.242 "ddgst": false 00:25:31.242 }, 00:25:31.242 "method": "bdev_nvme_attach_controller" 00:25:31.242 },{ 00:25:31.242 "params": { 00:25:31.242 "name": "Nvme1", 00:25:31.242 "trtype": "tcp", 00:25:31.242 "traddr": "10.0.0.2", 00:25:31.242 "adrfam": "ipv4", 00:25:31.242 "trsvcid": "4420", 00:25:31.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:31.242 "hdgst": false, 00:25:31.242 "ddgst": false 00:25:31.242 }, 00:25:31.242 "method": "bdev_nvme_attach_controller" 00:25:31.242 },{ 00:25:31.242 "params": { 00:25:31.242 "name": "Nvme2", 00:25:31.242 "trtype": "tcp", 00:25:31.242 "traddr": "10.0.0.2", 00:25:31.242 "adrfam": "ipv4", 00:25:31.242 "trsvcid": "4420", 00:25:31.242 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:31.242 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:31.242 "hdgst": false, 00:25:31.242 "ddgst": false 00:25:31.242 }, 00:25:31.242 "method": "bdev_nvme_attach_controller" 00:25:31.242 }' 00:25:31.501 19:26:08 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:31.502 19:26:08 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:31.502 19:26:08 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.502 19:26:08 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:31.502 19:26:08 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:31.502 19:26:08 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:25:31.502 19:26:08 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:31.502 19:26:08 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:31.502 
19:26:08 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:31.502 19:26:08 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.502 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:31.502 ... 00:25:31.502 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:31.502 ... 00:25:31.502 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:31.502 ... 00:25:31.502 fio-3.35 00:25:31.502 Starting 24 threads 00:25:32.438 [2024-02-14 19:26:09.521114] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:32.438 [2024-02-14 19:26:09.521816] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:42.407 00:25:42.407 filename0: (groupid=0, jobs=1): err= 0: pid=90111: Wed Feb 14 19:26:19 2024 00:25:42.407 read: IOPS=303, BW=1213KiB/s (1242kB/s)(11.9MiB/10017msec) 00:25:42.407 slat (usec): min=3, max=8019, avg=14.56, stdev=150.00 00:25:42.407 clat (msec): min=6, max=120, avg=52.66, stdev=17.05 00:25:42.407 lat (msec): min=6, max=120, avg=52.68, stdev=17.05 00:25:42.407 clat percentiles (msec): 00:25:42.407 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 37], 00:25:42.407 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 49], 60.00th=[ 59], 00:25:42.407 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 74], 95.00th=[ 84], 00:25:42.407 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 122], 99.95th=[ 122], 00:25:42.407 | 99.99th=[ 122] 00:25:42.407 bw ( KiB/s): min= 944, max= 1584, per=4.57%, avg=1210.25, stdev=183.31, samples=20 00:25:42.407 iops : min= 236, max= 396, avg=302.55, stdev=45.83, samples=20 00:25:42.407 lat (msec) : 10=1.58%, 50=51.93%, 100=46.13%, 250=0.36% 00:25:42.407 cpu : usr=34.49%, sys=0.58%, ctx=1006, majf=0, minf=9 00:25:42.407 IO depths : 1=0.4%, 2=0.9%, 4=6.5%, 8=78.7%, 16=13.6%, 32=0.0%, >=64=0.0% 00:25:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.407 complete : 0=0.0%, 4=89.3%, 8=6.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.407 issued rwts: total=3037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.407 filename0: (groupid=0, jobs=1): err= 0: pid=90112: Wed Feb 14 19:26:19 2024 00:25:42.407 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10014msec) 00:25:42.407 slat (usec): min=6, max=8029, avg=18.45, stdev=222.22 00:25:42.407 clat (msec): min=22, max=130, avg=61.49, stdev=16.52 00:25:42.407 lat (msec): min=22, max=130, avg=61.51, stdev=16.52 00:25:42.407 clat percentiles (msec): 00:25:42.407 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 48], 00:25:42.407 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 61], 00:25:42.407 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 88], 00:25:42.407 | 99.00th=[ 109], 99.50th=[ 118], 99.90th=[ 131], 99.95th=[ 131], 00:25:42.407 | 99.99th=[ 131] 00:25:42.407 bw ( KiB/s): min= 768, max= 1392, per=3.90%, avg=1033.35, stdev=145.38, samples=20 00:25:42.407 iops : min= 192, max= 348, avg=258.30, stdev=36.31, samples=20 00:25:42.407 lat (msec) : 50=25.18%, 100=72.51%, 250=2.31% 00:25:42.407 cpu : usr=33.82%, sys=0.41%, ctx=882, majf=0, minf=9 00:25:42.407 IO depths : 1=1.5%, 2=3.5%, 4=11.6%, 
8=71.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:25:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.407 complete : 0=0.0%, 4=90.5%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.407 issued rwts: total=2601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.407 filename0: (groupid=0, jobs=1): err= 0: pid=90113: Wed Feb 14 19:26:19 2024 00:25:42.407 read: IOPS=267, BW=1071KiB/s (1096kB/s)(10.5MiB/10005msec) 00:25:42.407 slat (usec): min=4, max=7970, avg=20.61, stdev=204.25 00:25:42.407 clat (msec): min=16, max=135, avg=59.63, stdev=17.34 00:25:42.407 lat (msec): min=16, max=135, avg=59.65, stdev=17.34 00:25:42.407 clat percentiles (msec): 00:25:42.407 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 47], 00:25:42.407 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:25:42.407 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 92], 00:25:42.407 | 99.00th=[ 115], 99.50th=[ 115], 99.90th=[ 136], 99.95th=[ 136], 00:25:42.407 | 99.99th=[ 136] 00:25:42.407 bw ( KiB/s): min= 768, max= 1408, per=4.01%, avg=1061.79, stdev=167.40, samples=19 00:25:42.407 iops : min= 192, max= 352, avg=265.42, stdev=41.86, samples=19 00:25:42.407 lat (msec) : 20=0.45%, 50=29.42%, 100=67.36%, 250=2.76% 00:25:42.407 cpu : usr=41.56%, sys=0.64%, ctx=1268, majf=0, minf=9 00:25:42.407 IO depths : 1=1.7%, 2=3.9%, 4=11.8%, 8=70.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:25:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.407 complete : 0=0.0%, 4=90.6%, 8=5.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.407 issued rwts: total=2678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.407 filename0: (groupid=0, jobs=1): err= 0: pid=90114: Wed Feb 14 19:26:19 2024 00:25:42.407 read: IOPS=254, BW=1017KiB/s (1042kB/s)(9.94MiB/10006msec) 00:25:42.407 slat (usec): min=4, max=7732, avg=18.12, stdev=172.59 00:25:42.407 clat (msec): min=7, max=143, avg=62.77, stdev=18.32 00:25:42.407 lat (msec): min=7, max=143, avg=62.79, stdev=18.32 00:25:42.407 clat percentiles (msec): 00:25:42.407 | 1.00th=[ 20], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 50], 00:25:42.407 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 64], 00:25:42.407 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 86], 95.00th=[ 96], 00:25:42.407 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:25:42.407 | 99.99th=[ 144] 00:25:42.407 bw ( KiB/s): min= 768, max= 1384, per=3.79%, avg=1004.26, stdev=131.06, samples=19 00:25:42.407 iops : min= 192, max= 346, avg=251.05, stdev=32.78, samples=19 00:25:42.407 lat (msec) : 10=0.63%, 20=0.39%, 50=21.89%, 100=73.56%, 250=3.54% 00:25:42.407 cpu : usr=37.07%, sys=0.46%, ctx=1035, majf=0, minf=9 00:25:42.407 IO depths : 1=2.0%, 2=5.1%, 4=15.7%, 8=66.1%, 16=11.1%, 32=0.0%, >=64=0.0% 00:25:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.407 complete : 0=0.0%, 4=91.7%, 8=3.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.407 issued rwts: total=2545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.407 filename0: (groupid=0, jobs=1): err= 0: pid=90115: Wed Feb 14 19:26:19 2024 00:25:42.407 read: IOPS=271, BW=1085KiB/s (1111kB/s)(10.6MiB/10009msec) 00:25:42.407 slat (usec): min=4, max=8028, avg=22.04, stdev=212.63 00:25:42.407 clat (msec): min=22, max=120, avg=58.82, stdev=17.16 00:25:42.407 lat 
(msec): min=22, max=120, avg=58.84, stdev=17.16 00:25:42.407 clat percentiles (msec): 00:25:42.407 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 45], 00:25:42.407 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:25:42.407 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 90], 00:25:42.407 | 99.00th=[ 112], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 121], 00:25:42.407 | 99.99th=[ 121] 00:25:42.407 bw ( KiB/s): min= 816, max= 1488, per=4.08%, avg=1080.00, stdev=174.09, samples=20 00:25:42.407 iops : min= 204, max= 372, avg=270.00, stdev=43.52, samples=20 00:25:42.407 lat (msec) : 50=31.22%, 100=66.24%, 250=2.54% 00:25:42.407 cpu : usr=43.18%, sys=0.63%, ctx=1255, majf=0, minf=9 00:25:42.407 IO depths : 1=1.7%, 2=3.9%, 4=12.0%, 8=70.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:25:42.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.407 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.407 issued rwts: total=2716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.407 filename0: (groupid=0, jobs=1): err= 0: pid=90116: Wed Feb 14 19:26:19 2024 00:25:42.408 read: IOPS=295, BW=1182KiB/s (1211kB/s)(11.6MiB/10045msec) 00:25:42.408 slat (usec): min=3, max=8028, avg=14.36, stdev=147.28 00:25:42.408 clat (msec): min=8, max=110, avg=53.93, stdev=16.44 00:25:42.408 lat (msec): min=8, max=110, avg=53.95, stdev=16.44 00:25:42.408 clat percentiles (msec): 00:25:42.408 | 1.00th=[ 11], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 39], 00:25:42.408 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 60], 00:25:42.408 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 73], 95.00th=[ 85], 00:25:42.408 | 99.00th=[ 99], 99.50th=[ 107], 99.90th=[ 111], 99.95th=[ 111], 00:25:42.408 | 99.99th=[ 111] 00:25:42.408 bw ( KiB/s): min= 896, max= 1424, per=4.46%, avg=1181.05, stdev=143.76, samples=20 00:25:42.408 iops : min= 224, max= 356, avg=295.25, stdev=35.95, samples=20 00:25:42.408 lat (msec) : 10=0.54%, 20=0.54%, 50=45.64%, 100=52.41%, 250=0.88% 00:25:42.408 cpu : usr=33.78%, sys=0.42%, ctx=959, majf=0, minf=9 00:25:42.408 IO depths : 1=0.5%, 2=1.5%, 4=8.1%, 8=76.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:25:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 issued rwts: total=2969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.408 filename0: (groupid=0, jobs=1): err= 0: pid=90117: Wed Feb 14 19:26:19 2024 00:25:42.408 read: IOPS=282, BW=1132KiB/s (1159kB/s)(11.1MiB/10027msec) 00:25:42.408 slat (usec): min=6, max=4027, avg=13.38, stdev=75.75 00:25:42.408 clat (msec): min=24, max=128, avg=56.42, stdev=17.30 00:25:42.408 lat (msec): min=24, max=128, avg=56.43, stdev=17.30 00:25:42.408 clat percentiles (msec): 00:25:42.408 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 40], 00:25:42.408 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 59], 00:25:42.408 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 89], 00:25:42.408 | 99.00th=[ 110], 99.50th=[ 113], 99.90th=[ 114], 99.95th=[ 114], 00:25:42.408 | 99.99th=[ 129] 00:25:42.408 bw ( KiB/s): min= 728, max= 1376, per=4.27%, avg=1130.20, stdev=157.43, samples=20 00:25:42.408 iops : min= 182, max= 344, avg=282.50, stdev=39.39, samples=20 00:25:42.408 lat (msec) : 50=40.29%, 100=57.42%, 250=2.29% 00:25:42.408 cpu : 
usr=36.77%, sys=0.55%, ctx=1066, majf=0, minf=9 00:25:42.408 IO depths : 1=1.1%, 2=2.3%, 4=9.1%, 8=74.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:25:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 complete : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 issued rwts: total=2837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.408 filename0: (groupid=0, jobs=1): err= 0: pid=90118: Wed Feb 14 19:26:19 2024 00:25:42.408 read: IOPS=268, BW=1074KiB/s (1100kB/s)(10.5MiB/10006msec) 00:25:42.408 slat (usec): min=5, max=10018, avg=29.43, stdev=337.27 00:25:42.408 clat (msec): min=12, max=137, avg=59.35, stdev=16.93 00:25:42.408 lat (msec): min=12, max=137, avg=59.38, stdev=16.93 00:25:42.408 clat percentiles (msec): 00:25:42.408 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 46], 00:25:42.408 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 62], 00:25:42.408 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 87], 00:25:42.408 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 138], 99.95th=[ 138], 00:25:42.408 | 99.99th=[ 138] 00:25:42.408 bw ( KiB/s): min= 856, max= 1512, per=4.04%, avg=1070.32, stdev=169.21, samples=19 00:25:42.408 iops : min= 214, max= 378, avg=267.58, stdev=42.30, samples=19 00:25:42.408 lat (msec) : 20=0.67%, 50=28.67%, 100=69.02%, 250=1.64% 00:25:42.408 cpu : usr=39.76%, sys=0.52%, ctx=1122, majf=0, minf=9 00:25:42.408 IO depths : 1=1.6%, 2=3.7%, 4=12.0%, 8=71.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:25:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 complete : 0=0.0%, 4=90.6%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 issued rwts: total=2686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.408 filename1: (groupid=0, jobs=1): err= 0: pid=90119: Wed Feb 14 19:26:19 2024 00:25:42.408 read: IOPS=244, BW=978KiB/s (1002kB/s)(9808KiB/10024msec) 00:25:42.408 slat (usec): min=4, max=8029, avg=24.96, stdev=268.44 00:25:42.408 clat (msec): min=23, max=146, avg=65.24, stdev=16.65 00:25:42.408 lat (msec): min=23, max=146, avg=65.27, stdev=16.64 00:25:42.408 clat percentiles (msec): 00:25:42.408 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 54], 00:25:42.408 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 68], 00:25:42.408 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 86], 95.00th=[ 95], 00:25:42.408 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 146], 99.95th=[ 146], 00:25:42.408 | 99.99th=[ 146] 00:25:42.408 bw ( KiB/s): min= 768, max= 1296, per=3.68%, avg=974.20, stdev=122.28, samples=20 00:25:42.408 iops : min= 192, max= 324, avg=243.50, stdev=30.61, samples=20 00:25:42.408 lat (msec) : 50=16.72%, 100=80.30%, 250=2.98% 00:25:42.408 cpu : usr=36.25%, sys=0.62%, ctx=1027, majf=0, minf=9 00:25:42.408 IO depths : 1=2.2%, 2=5.5%, 4=16.2%, 8=65.3%, 16=10.8%, 32=0.0%, >=64=0.0% 00:25:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 complete : 0=0.0%, 4=91.6%, 8=3.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 issued rwts: total=2452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.408 filename1: (groupid=0, jobs=1): err= 0: pid=90120: Wed Feb 14 19:26:19 2024 00:25:42.408 read: IOPS=259, BW=1036KiB/s (1061kB/s)(10.1MiB/10001msec) 00:25:42.408 slat (usec): min=3, max=4091, avg=17.06, stdev=99.12 
00:25:42.408 clat (msec): min=8, max=125, avg=61.63, stdev=17.28 00:25:42.408 lat (msec): min=8, max=125, avg=61.65, stdev=17.28 00:25:42.408 clat percentiles (msec): 00:25:42.408 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:25:42.408 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:25:42.408 | 70.00th=[ 68], 80.00th=[ 75], 90.00th=[ 83], 95.00th=[ 91], 00:25:42.408 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 127], 99.95th=[ 127], 00:25:42.408 | 99.99th=[ 127] 00:25:42.408 bw ( KiB/s): min= 640, max= 1368, per=3.89%, avg=1030.11, stdev=159.65, samples=19 00:25:42.408 iops : min= 160, max= 342, avg=257.53, stdev=39.91, samples=19 00:25:42.408 lat (msec) : 10=0.62%, 50=23.04%, 100=73.87%, 250=2.47% 00:25:42.408 cpu : usr=38.61%, sys=0.53%, ctx=1217, majf=0, minf=9 00:25:42.408 IO depths : 1=2.5%, 2=5.8%, 4=15.6%, 8=65.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:25:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 complete : 0=0.0%, 4=91.7%, 8=3.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 issued rwts: total=2591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.408 filename1: (groupid=0, jobs=1): err= 0: pid=90121: Wed Feb 14 19:26:19 2024 00:25:42.408 read: IOPS=287, BW=1149KiB/s (1176kB/s)(11.2MiB/10026msec) 00:25:42.408 slat (usec): min=5, max=8030, avg=17.99, stdev=211.35 00:25:42.408 clat (msec): min=21, max=131, avg=55.55, stdev=16.44 00:25:42.408 lat (msec): min=21, max=131, avg=55.57, stdev=16.44 00:25:42.408 clat percentiles (msec): 00:25:42.408 | 1.00th=[ 25], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:25:42.408 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 61], 00:25:42.408 | 70.00th=[ 62], 80.00th=[ 68], 90.00th=[ 74], 95.00th=[ 84], 00:25:42.408 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 132], 99.95th=[ 132], 00:25:42.408 | 99.99th=[ 132] 00:25:42.408 bw ( KiB/s): min= 856, max= 1408, per=4.33%, avg=1145.05, stdev=160.60, samples=20 00:25:42.408 iops : min= 214, max= 352, avg=286.25, stdev=40.15, samples=20 00:25:42.408 lat (msec) : 50=40.47%, 100=58.15%, 250=1.39% 00:25:42.408 cpu : usr=37.35%, sys=0.45%, ctx=1054, majf=0, minf=9 00:25:42.408 IO depths : 1=1.0%, 2=2.2%, 4=8.6%, 8=75.2%, 16=12.9%, 32=0.0%, >=64=0.0% 00:25:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 complete : 0=0.0%, 4=89.9%, 8=5.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 issued rwts: total=2879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.408 filename1: (groupid=0, jobs=1): err= 0: pid=90122: Wed Feb 14 19:26:19 2024 00:25:42.408 read: IOPS=259, BW=1036KiB/s (1061kB/s)(10.1MiB/10003msec) 00:25:42.408 slat (usec): min=5, max=9026, avg=22.71, stdev=284.42 00:25:42.408 clat (msec): min=3, max=120, avg=61.60, stdev=17.92 00:25:42.408 lat (msec): min=3, max=120, avg=61.62, stdev=17.93 00:25:42.408 clat percentiles (msec): 00:25:42.408 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 50], 00:25:42.408 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 64], 00:25:42.408 | 70.00th=[ 69], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 93], 00:25:42.408 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:25:42.408 | 99.99th=[ 121] 00:25:42.408 bw ( KiB/s): min= 640, max= 1584, per=3.84%, avg=1016.63, stdev=173.73, samples=19 00:25:42.408 iops : min= 160, max= 396, avg=254.16, stdev=43.43, samples=19 00:25:42.408 
lat (msec) : 4=0.35%, 10=0.27%, 20=0.35%, 50=20.49%, 100=75.72% 00:25:42.408 lat (msec) : 250=2.82% 00:25:42.408 cpu : usr=39.63%, sys=0.49%, ctx=1052, majf=0, minf=9 00:25:42.408 IO depths : 1=2.2%, 2=5.3%, 4=15.6%, 8=65.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:25:42.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 complete : 0=0.0%, 4=91.7%, 8=3.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.408 issued rwts: total=2591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.408 filename1: (groupid=0, jobs=1): err= 0: pid=90123: Wed Feb 14 19:26:19 2024 00:25:42.408 read: IOPS=309, BW=1238KiB/s (1268kB/s)(12.1MiB/10043msec) 00:25:42.408 slat (usec): min=6, max=8026, avg=15.68, stdev=159.18 00:25:42.408 clat (msec): min=2, max=137, avg=51.56, stdev=17.93 00:25:42.408 lat (msec): min=2, max=137, avg=51.58, stdev=17.93 00:25:42.408 clat percentiles (msec): 00:25:42.408 | 1.00th=[ 4], 5.00th=[ 25], 10.00th=[ 32], 20.00th=[ 37], 00:25:42.408 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 51], 60.00th=[ 58], 00:25:42.408 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 73], 95.00th=[ 84], 00:25:42.408 | 99.00th=[ 96], 99.50th=[ 103], 99.90th=[ 138], 99.95th=[ 138], 00:25:42.408 | 99.99th=[ 138] 00:25:42.408 bw ( KiB/s): min= 944, max= 2176, per=4.67%, avg=1236.80, stdev=262.72, samples=20 00:25:42.408 iops : min= 236, max= 544, avg=309.20, stdev=65.68, samples=20 00:25:42.408 lat (msec) : 4=1.03%, 10=1.99%, 20=0.06%, 50=46.24%, 100=49.97% 00:25:42.409 lat (msec) : 250=0.71% 00:25:42.409 cpu : usr=40.56%, sys=0.55%, ctx=1183, majf=0, minf=9 00:25:42.409 IO depths : 1=1.0%, 2=2.3%, 4=9.6%, 8=74.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:25:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 issued rwts: total=3108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.409 filename1: (groupid=0, jobs=1): err= 0: pid=90124: Wed Feb 14 19:26:19 2024 00:25:42.409 read: IOPS=275, BW=1101KiB/s (1128kB/s)(10.8MiB/10043msec) 00:25:42.409 slat (usec): min=4, max=8064, avg=21.96, stdev=275.04 00:25:42.409 clat (msec): min=18, max=131, avg=57.93, stdev=16.20 00:25:42.409 lat (msec): min=18, max=131, avg=57.95, stdev=16.20 00:25:42.409 clat percentiles (msec): 00:25:42.409 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 45], 00:25:42.409 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:25:42.409 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 80], 95.00th=[ 85], 00:25:42.409 | 99.00th=[ 101], 99.50th=[ 108], 99.90th=[ 132], 99.95th=[ 132], 00:25:42.409 | 99.99th=[ 132] 00:25:42.409 bw ( KiB/s): min= 816, max= 1336, per=4.15%, avg=1099.60, stdev=141.12, samples=20 00:25:42.409 iops : min= 204, max= 334, avg=274.85, stdev=35.26, samples=20 00:25:42.409 lat (msec) : 20=0.22%, 50=31.21%, 100=67.45%, 250=1.12% 00:25:42.409 cpu : usr=35.64%, sys=0.52%, ctx=991, majf=0, minf=9 00:25:42.409 IO depths : 1=1.0%, 2=2.0%, 4=7.9%, 8=76.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:25:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 issued rwts: total=2765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.409 filename1: (groupid=0, jobs=1): err= 0: pid=90125: 
Wed Feb 14 19:26:19 2024 00:25:42.409 read: IOPS=250, BW=1004KiB/s (1028kB/s)(9.81MiB/10008msec) 00:25:42.409 slat (usec): min=5, max=8033, avg=16.03, stdev=160.26 00:25:42.409 clat (msec): min=9, max=140, avg=63.63, stdev=17.67 00:25:42.409 lat (msec): min=9, max=140, avg=63.64, stdev=17.67 00:25:42.409 clat percentiles (msec): 00:25:42.409 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 50], 00:25:42.409 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 63], 00:25:42.409 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 96], 00:25:42.409 | 99.00th=[ 118], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 142], 00:25:42.409 | 99.99th=[ 142] 00:25:42.409 bw ( KiB/s): min= 768, max= 1328, per=3.75%, avg=992.00, stdev=123.01, samples=19 00:25:42.409 iops : min= 192, max= 332, avg=248.00, stdev=30.75, samples=19 00:25:42.409 lat (msec) : 10=0.24%, 50=21.30%, 100=75.52%, 250=2.95% 00:25:42.409 cpu : usr=33.81%, sys=0.40%, ctx=876, majf=0, minf=9 00:25:42.409 IO depths : 1=2.3%, 2=5.0%, 4=15.6%, 8=66.6%, 16=10.5%, 32=0.0%, >=64=0.0% 00:25:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 complete : 0=0.0%, 4=91.1%, 8=3.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 issued rwts: total=2512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.409 filename1: (groupid=0, jobs=1): err= 0: pid=90126: Wed Feb 14 19:26:19 2024 00:25:42.409 read: IOPS=256, BW=1027KiB/s (1052kB/s)(10.0MiB/10011msec) 00:25:42.409 slat (usec): min=4, max=4860, avg=17.27, stdev=147.51 00:25:42.409 clat (msec): min=22, max=135, avg=62.20, stdev=18.10 00:25:42.409 lat (msec): min=22, max=135, avg=62.22, stdev=18.10 00:25:42.409 clat percentiles (msec): 00:25:42.409 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:25:42.409 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:25:42.409 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 93], 00:25:42.409 | 99.00th=[ 118], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:25:42.409 | 99.99th=[ 136] 00:25:42.409 bw ( KiB/s): min= 768, max= 1280, per=3.83%, avg=1014.74, stdev=154.88, samples=19 00:25:42.409 iops : min= 192, max= 320, avg=253.68, stdev=38.72, samples=19 00:25:42.409 lat (msec) : 50=25.95%, 100=71.25%, 250=2.80% 00:25:42.409 cpu : usr=38.13%, sys=0.67%, ctx=1062, majf=0, minf=9 00:25:42.409 IO depths : 1=2.2%, 2=5.1%, 4=14.7%, 8=66.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:25:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 complete : 0=0.0%, 4=91.4%, 8=3.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 issued rwts: total=2570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.409 filename2: (groupid=0, jobs=1): err= 0: pid=90127: Wed Feb 14 19:26:19 2024 00:25:42.409 read: IOPS=301, BW=1207KiB/s (1236kB/s)(11.8MiB/10031msec) 00:25:42.409 slat (usec): min=4, max=8023, avg=17.35, stdev=206.03 00:25:42.409 clat (msec): min=11, max=122, avg=52.89, stdev=17.63 00:25:42.409 lat (msec): min=11, max=122, avg=52.91, stdev=17.63 00:25:42.409 clat percentiles (msec): 00:25:42.409 | 1.00th=[ 15], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37], 00:25:42.409 | 30.00th=[ 42], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 57], 00:25:42.409 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 87], 00:25:42.409 | 99.00th=[ 111], 99.50th=[ 111], 99.90th=[ 123], 99.95th=[ 123], 00:25:42.409 | 99.99th=[ 123] 00:25:42.409 bw ( 
KiB/s): min= 768, max= 1472, per=4.55%, avg=1204.00, stdev=201.53, samples=20 00:25:42.409 iops : min= 192, max= 368, avg=301.00, stdev=50.38, samples=20 00:25:42.409 lat (msec) : 20=1.06%, 50=52.51%, 100=43.99%, 250=2.45% 00:25:42.409 cpu : usr=33.92%, sys=0.50%, ctx=889, majf=0, minf=9 00:25:42.409 IO depths : 1=1.1%, 2=2.5%, 4=9.3%, 8=74.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:25:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 issued rwts: total=3026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.409 filename2: (groupid=0, jobs=1): err= 0: pid=90128: Wed Feb 14 19:26:19 2024 00:25:42.409 read: IOPS=259, BW=1036KiB/s (1061kB/s)(10.1MiB/10010msec) 00:25:42.409 slat (usec): min=4, max=8036, avg=28.92, stdev=289.31 00:25:42.409 clat (msec): min=13, max=117, avg=61.57, stdev=16.14 00:25:42.409 lat (msec): min=13, max=117, avg=61.60, stdev=16.14 00:25:42.409 clat percentiles (msec): 00:25:42.409 | 1.00th=[ 25], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 52], 00:25:42.409 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 63], 00:25:42.409 | 70.00th=[ 67], 80.00th=[ 73], 90.00th=[ 83], 95.00th=[ 88], 00:25:42.409 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 117], 99.95th=[ 117], 00:25:42.409 | 99.99th=[ 117] 00:25:42.409 bw ( KiB/s): min= 768, max= 1408, per=3.87%, avg=1024.42, stdev=138.86, samples=19 00:25:42.409 iops : min= 192, max= 352, avg=256.11, stdev=34.71, samples=19 00:25:42.409 lat (msec) : 20=0.58%, 50=18.09%, 100=78.33%, 250=3.01% 00:25:42.409 cpu : usr=43.61%, sys=0.73%, ctx=1313, majf=0, minf=9 00:25:42.409 IO depths : 1=2.5%, 2=5.5%, 4=15.0%, 8=65.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:25:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 complete : 0=0.0%, 4=91.6%, 8=3.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 issued rwts: total=2593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.409 filename2: (groupid=0, jobs=1): err= 0: pid=90129: Wed Feb 14 19:26:19 2024 00:25:42.409 read: IOPS=286, BW=1147KiB/s (1175kB/s)(11.2MiB/10005msec) 00:25:42.409 slat (usec): min=4, max=8019, avg=18.36, stdev=183.40 00:25:42.409 clat (msec): min=9, max=119, avg=55.68, stdev=17.57 00:25:42.409 lat (msec): min=9, max=119, avg=55.70, stdev=17.57 00:25:42.409 clat percentiles (msec): 00:25:42.409 | 1.00th=[ 15], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 41], 00:25:42.409 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 60], 00:25:42.409 | 70.00th=[ 62], 80.00th=[ 68], 90.00th=[ 82], 95.00th=[ 88], 00:25:42.409 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:25:42.409 | 99.99th=[ 121] 00:25:42.409 bw ( KiB/s): min= 848, max= 1536, per=4.34%, avg=1147.47, stdev=184.68, samples=19 00:25:42.409 iops : min= 212, max= 384, avg=286.84, stdev=46.16, samples=19 00:25:42.409 lat (msec) : 10=0.49%, 20=0.63%, 50=38.79%, 100=58.49%, 250=1.60% 00:25:42.409 cpu : usr=41.15%, sys=0.62%, ctx=1194, majf=0, minf=9 00:25:42.409 IO depths : 1=1.4%, 2=3.0%, 4=10.1%, 8=73.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:25:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 issued rwts: total=2869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.409 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:25:42.409 filename2: (groupid=0, jobs=1): err= 0: pid=90130: Wed Feb 14 19:26:19 2024 00:25:42.409 read: IOPS=337, BW=1351KiB/s (1383kB/s)(13.2MiB/10034msec) 00:25:42.409 slat (usec): min=3, max=5018, avg=14.71, stdev=124.47 00:25:42.409 clat (msec): min=7, max=118, avg=47.18, stdev=15.93 00:25:42.409 lat (msec): min=7, max=118, avg=47.19, stdev=15.93 00:25:42.409 clat percentiles (msec): 00:25:42.409 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 36], 00:25:42.409 | 30.00th=[ 39], 40.00th=[ 41], 50.00th=[ 45], 60.00th=[ 48], 00:25:42.409 | 70.00th=[ 54], 80.00th=[ 59], 90.00th=[ 67], 95.00th=[ 81], 00:25:42.409 | 99.00th=[ 93], 99.50th=[ 110], 99.90th=[ 118], 99.95th=[ 118], 00:25:42.409 | 99.99th=[ 118] 00:25:42.409 bw ( KiB/s): min= 896, max= 1872, per=5.10%, avg=1348.95, stdev=215.62, samples=20 00:25:42.409 iops : min= 224, max= 468, avg=337.20, stdev=53.87, samples=20 00:25:42.409 lat (msec) : 10=0.94%, 20=1.71%, 50=62.46%, 100=34.09%, 250=0.80% 00:25:42.409 cpu : usr=42.84%, sys=0.60%, ctx=1449, majf=0, minf=9 00:25:42.409 IO depths : 1=0.5%, 2=1.0%, 4=6.4%, 8=78.6%, 16=13.5%, 32=0.0%, >=64=0.0% 00:25:42.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 complete : 0=0.0%, 4=89.2%, 8=6.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.409 issued rwts: total=3388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.409 filename2: (groupid=0, jobs=1): err= 0: pid=90131: Wed Feb 14 19:26:19 2024 00:25:42.409 read: IOPS=307, BW=1229KiB/s (1259kB/s)(12.0MiB/10036msec) 00:25:42.409 slat (usec): min=4, max=8023, avg=15.14, stdev=161.56 00:25:42.409 clat (msec): min=7, max=120, avg=51.90, stdev=16.78 00:25:42.409 lat (msec): min=7, max=120, avg=51.92, stdev=16.77 00:25:42.409 clat percentiles (msec): 00:25:42.410 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 36], 00:25:42.410 | 30.00th=[ 41], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 58], 00:25:42.410 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 84], 00:25:42.410 | 99.00th=[ 99], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:25:42.410 | 99.99th=[ 121] 00:25:42.410 bw ( KiB/s): min= 976, max= 1712, per=4.64%, avg=1227.20, stdev=195.00, samples=20 00:25:42.410 iops : min= 244, max= 428, avg=306.80, stdev=48.75, samples=20 00:25:42.410 lat (msec) : 10=0.52%, 20=0.71%, 50=52.17%, 100=45.62%, 250=0.97% 00:25:42.410 cpu : usr=35.95%, sys=0.47%, ctx=946, majf=0, minf=9 00:25:42.410 IO depths : 1=0.4%, 2=0.8%, 4=6.2%, 8=79.1%, 16=13.6%, 32=0.0%, >=64=0.0% 00:25:42.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.410 complete : 0=0.0%, 4=89.2%, 8=6.9%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.410 issued rwts: total=3084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.410 filename2: (groupid=0, jobs=1): err= 0: pid=90132: Wed Feb 14 19:26:19 2024 00:25:42.410 read: IOPS=255, BW=1021KiB/s (1046kB/s)(9.98MiB/10010msec) 00:25:42.410 slat (usec): min=4, max=8027, avg=19.85, stdev=224.27 00:25:42.410 clat (msec): min=25, max=129, avg=62.50, stdev=16.60 00:25:42.410 lat (msec): min=25, max=129, avg=62.52, stdev=16.60 00:25:42.410 clat percentiles (msec): 00:25:42.410 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 50], 00:25:42.410 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 63], 00:25:42.410 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 96], 00:25:42.410 
| 99.00th=[ 118], 99.50th=[ 127], 99.90th=[ 130], 99.95th=[ 130], 00:25:42.410 | 99.99th=[ 130] 00:25:42.410 bw ( KiB/s): min= 768, max= 1384, per=3.84%, avg=1016.00, stdev=142.97, samples=20 00:25:42.410 iops : min= 192, max= 346, avg=254.00, stdev=35.74, samples=20 00:25:42.410 lat (msec) : 50=21.75%, 100=75.00%, 250=3.25% 00:25:42.410 cpu : usr=34.00%, sys=0.41%, ctx=885, majf=0, minf=9 00:25:42.410 IO depths : 1=1.9%, 2=4.5%, 4=14.6%, 8=67.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:25:42.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.410 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.410 issued rwts: total=2556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.410 filename2: (groupid=0, jobs=1): err= 0: pid=90133: Wed Feb 14 19:26:19 2024 00:25:42.410 read: IOPS=263, BW=1055KiB/s (1080kB/s)(10.3MiB/10027msec) 00:25:42.410 slat (usec): min=6, max=8026, avg=24.02, stdev=254.13 00:25:42.410 clat (msec): min=15, max=146, avg=60.47, stdev=16.62 00:25:42.410 lat (msec): min=15, max=146, avg=60.49, stdev=16.62 00:25:42.410 clat percentiles (msec): 00:25:42.410 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 00:25:42.410 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 63], 00:25:42.410 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 89], 00:25:42.410 | 99.00th=[ 110], 99.50th=[ 113], 99.90th=[ 146], 99.95th=[ 146], 00:25:42.410 | 99.99th=[ 146] 00:25:42.410 bw ( KiB/s): min= 856, max= 1408, per=3.97%, avg=1051.05, stdev=124.01, samples=20 00:25:42.410 iops : min= 214, max= 352, avg=262.75, stdev=31.00, samples=20 00:25:42.410 lat (msec) : 20=0.57%, 50=23.78%, 100=73.46%, 250=2.19% 00:25:42.410 cpu : usr=43.59%, sys=0.60%, ctx=1292, majf=0, minf=9 00:25:42.410 IO depths : 1=1.9%, 2=4.6%, 4=13.8%, 8=68.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:25:42.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.410 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.410 issued rwts: total=2645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.410 filename2: (groupid=0, jobs=1): err= 0: pid=90134: Wed Feb 14 19:26:19 2024 00:25:42.410 read: IOPS=273, BW=1095KiB/s (1121kB/s)(10.7MiB/10021msec) 00:25:42.410 slat (usec): min=3, max=8018, avg=20.19, stdev=202.37 00:25:42.410 clat (msec): min=23, max=129, avg=58.28, stdev=16.58 00:25:42.410 lat (msec): min=23, max=129, avg=58.30, stdev=16.59 00:25:42.410 clat percentiles (msec): 00:25:42.410 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 45], 00:25:42.410 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 61], 00:25:42.410 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 82], 95.00th=[ 89], 00:25:42.410 | 99.00th=[ 108], 99.50th=[ 122], 99.90th=[ 130], 99.95th=[ 130], 00:25:42.410 | 99.99th=[ 130] 00:25:42.410 bw ( KiB/s): min= 888, max= 1488, per=4.14%, avg=1094.45, stdev=172.73, samples=20 00:25:42.410 iops : min= 222, max= 372, avg=273.60, stdev=43.18, samples=20 00:25:42.410 lat (msec) : 50=30.01%, 100=67.65%, 250=2.33% 00:25:42.410 cpu : usr=44.11%, sys=0.64%, ctx=1332, majf=0, minf=9 00:25:42.410 IO depths : 1=2.2%, 2=4.8%, 4=13.6%, 8=68.2%, 16=11.2%, 32=0.0%, >=64=0.0% 00:25:42.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.410 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.410 issued rwts: 
total=2742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:42.410 00:25:42.410 Run status group 0 (all jobs): 00:25:42.410 READ: bw=25.8MiB/s (27.1MB/s), 978KiB/s-1351KiB/s (1002kB/s-1383kB/s), io=260MiB (272MB), run=10001-10045msec 00:25:42.669 19:26:20 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:42.669 19:26:20 -- target/dif.sh@43 -- # local sub 00:25:42.669 19:26:20 -- target/dif.sh@45 -- # for sub in "$@" 00:25:42.669 19:26:20 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:42.669 19:26:20 -- target/dif.sh@36 -- # local sub_id=0 00:25:42.669 19:26:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:42.669 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.669 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.669 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.669 19:26:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:42.669 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.669 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.669 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.669 19:26:20 -- target/dif.sh@45 -- # for sub in "$@" 00:25:42.669 19:26:20 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:42.669 19:26:20 -- target/dif.sh@36 -- # local sub_id=1 00:25:42.669 19:26:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.669 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.669 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.669 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.669 19:26:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:42.669 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.669 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.669 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.669 19:26:20 -- target/dif.sh@45 -- # for sub in "$@" 00:25:42.669 19:26:20 -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:42.669 19:26:20 -- target/dif.sh@36 -- # local sub_id=2 00:25:42.669 19:26:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:42.669 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.669 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.669 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.669 19:26:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:42.669 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.669 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.669 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.669 19:26:20 -- target/dif.sh@115 -- # NULL_DIF=1 00:25:42.669 19:26:20 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:42.669 19:26:20 -- target/dif.sh@115 -- # numjobs=2 00:25:42.669 19:26:20 -- target/dif.sh@115 -- # iodepth=8 00:25:42.669 19:26:20 -- target/dif.sh@115 -- # runtime=5 00:25:42.669 19:26:20 -- target/dif.sh@115 -- # files=1 00:25:42.669 19:26:20 -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:42.669 19:26:20 -- target/dif.sh@28 -- # local sub 00:25:42.669 19:26:20 -- target/dif.sh@30 -- # for sub in "$@" 00:25:42.669 19:26:20 -- target/dif.sh@31 -- # create_subsystem 0 00:25:42.669 19:26:20 -- target/dif.sh@18 -- # local sub_id=0 00:25:42.669 19:26:20 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:42.669 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.669 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.669 bdev_null0 00:25:42.669 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.669 19:26:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:42.669 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.669 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.669 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.669 19:26:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:42.669 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.669 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.669 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.929 19:26:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.929 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.929 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.929 [2024-02-14 19:26:20.089237] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.929 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.929 19:26:20 -- target/dif.sh@30 -- # for sub in "$@" 00:25:42.929 19:26:20 -- target/dif.sh@31 -- # create_subsystem 1 00:25:42.929 19:26:20 -- target/dif.sh@18 -- # local sub_id=1 00:25:42.929 19:26:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:42.929 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.929 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.929 bdev_null1 00:25:42.929 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.929 19:26:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:42.929 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.929 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.929 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.929 19:26:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:42.929 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.929 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.929 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.929 19:26:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.929 19:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.929 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:42.929 19:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.929 19:26:20 -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:42.929 19:26:20 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:42.929 19:26:20 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:42.929 19:26:20 -- nvmf/common.sh@520 -- # config=() 00:25:42.929 19:26:20 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.929 19:26:20 -- nvmf/common.sh@520 -- # local subsystem config 
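Note: the trace above sets up the target for this pass: a null bdev (64 MiB, 512-byte blocks, 16-byte metadata, DIF type 1) exported through subsystem nqn.2016-06.io.spdk:cnode0 with a TCP listener on 10.0.0.2:4420, then the same again for bdev_null1/cnode1. A minimal sketch of that sequence issued by hand with scripts/rpc.py is shown below; it is not part of the recorded run and assumes the nvmf_tgt started earlier is still listening on the default RPC socket and that the TCP transport already exists.

  # sketch only: mirrors the rpc_cmd calls traced above
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420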
00:25:42.929 19:26:20 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.929 19:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.929 19:26:20 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:25:42.929 19:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.929 { 00:25:42.929 "params": { 00:25:42.929 "name": "Nvme$subsystem", 00:25:42.929 "trtype": "$TEST_TRANSPORT", 00:25:42.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.929 "adrfam": "ipv4", 00:25:42.929 "trsvcid": "$NVMF_PORT", 00:25:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.929 "hdgst": ${hdgst:-false}, 00:25:42.929 "ddgst": ${ddgst:-false} 00:25:42.929 }, 00:25:42.929 "method": "bdev_nvme_attach_controller" 00:25:42.929 } 00:25:42.929 EOF 00:25:42.929 )") 00:25:42.929 19:26:20 -- target/dif.sh@82 -- # gen_fio_conf 00:25:42.929 19:26:20 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:42.929 19:26:20 -- target/dif.sh@54 -- # local file 00:25:42.929 19:26:20 -- common/autotest_common.sh@1316 -- # local sanitizers 00:25:42.929 19:26:20 -- target/dif.sh@56 -- # cat 00:25:42.929 19:26:20 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.929 19:26:20 -- common/autotest_common.sh@1318 -- # shift 00:25:42.929 19:26:20 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:25:42.929 19:26:20 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.929 19:26:20 -- nvmf/common.sh@542 -- # cat 00:25:42.929 19:26:20 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.929 19:26:20 -- common/autotest_common.sh@1322 -- # grep libasan 00:25:42.929 19:26:20 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:42.929 19:26:20 -- target/dif.sh@72 -- # (( file <= files )) 00:25:42.929 19:26:20 -- target/dif.sh@73 -- # cat 00:25:42.929 19:26:20 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:42.929 19:26:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.929 19:26:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.929 { 00:25:42.929 "params": { 00:25:42.929 "name": "Nvme$subsystem", 00:25:42.929 "trtype": "$TEST_TRANSPORT", 00:25:42.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.929 "adrfam": "ipv4", 00:25:42.929 "trsvcid": "$NVMF_PORT", 00:25:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.929 "hdgst": ${hdgst:-false}, 00:25:42.929 "ddgst": ${ddgst:-false} 00:25:42.929 }, 00:25:42.929 "method": "bdev_nvme_attach_controller" 00:25:42.929 } 00:25:42.929 EOF 00:25:42.929 )") 00:25:42.929 19:26:20 -- nvmf/common.sh@542 -- # cat 00:25:42.929 19:26:20 -- target/dif.sh@72 -- # (( file++ )) 00:25:42.929 19:26:20 -- target/dif.sh@72 -- # (( file <= files )) 00:25:42.929 19:26:20 -- nvmf/common.sh@544 -- # jq . 
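Note: gen_nvmf_target_json above assembles one bdev_nvme_attach_controller entry per subsystem; the finished JSON is printed a few lines below and handed to fio on /dev/fd/62 through SPDK's fio bdev plugin. A rough standalone equivalent follows, with conf.json and dif.fio standing in for the two file descriptors the script uses (both names are placeholders, not files from this run).

  # sketch: fio driven through the SPDK bdev engine, as the fio_plugin line above does
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf conf.json dif.fio

The 'RPC Unix domain socket path /var/tmp/spdk.sock in use' errors that follow appear to come from the fio plugin trying to start its own RPC listener while nvmf_tgt already holds that socket; the jobs start and run regardless.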
00:25:42.929 19:26:20 -- nvmf/common.sh@545 -- # IFS=, 00:25:42.929 19:26:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:42.929 "params": { 00:25:42.929 "name": "Nvme0", 00:25:42.929 "trtype": "tcp", 00:25:42.929 "traddr": "10.0.0.2", 00:25:42.929 "adrfam": "ipv4", 00:25:42.929 "trsvcid": "4420", 00:25:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:42.929 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:42.929 "hdgst": false, 00:25:42.929 "ddgst": false 00:25:42.929 }, 00:25:42.929 "method": "bdev_nvme_attach_controller" 00:25:42.929 },{ 00:25:42.929 "params": { 00:25:42.929 "name": "Nvme1", 00:25:42.929 "trtype": "tcp", 00:25:42.929 "traddr": "10.0.0.2", 00:25:42.929 "adrfam": "ipv4", 00:25:42.929 "trsvcid": "4420", 00:25:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:42.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:42.929 "hdgst": false, 00:25:42.929 "ddgst": false 00:25:42.929 }, 00:25:42.929 "method": "bdev_nvme_attach_controller" 00:25:42.929 }' 00:25:42.929 19:26:20 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:42.929 19:26:20 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:42.929 19:26:20 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.929 19:26:20 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.929 19:26:20 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:25:42.929 19:26:20 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:42.929 19:26:20 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:42.929 19:26:20 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:42.929 19:26:20 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:42.929 19:26:20 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.929 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:42.929 ... 00:25:42.929 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:42.929 ... 00:25:42.929 fio-3.35 00:25:42.929 Starting 4 threads 00:25:43.497 [2024-02-14 19:26:20.823888] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:43.497 [2024-02-14 19:26:20.823949] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:48.788 00:25:48.788 filename0: (groupid=0, jobs=1): err= 0: pid=90267: Wed Feb 14 19:26:25 2024 00:25:48.788 read: IOPS=2311, BW=18.1MiB/s (18.9MB/s)(90.3MiB/5001msec) 00:25:48.788 slat (usec): min=6, max=102, avg=14.12, stdev= 6.54 00:25:48.788 clat (usec): min=396, max=5686, avg=3394.15, stdev=194.65 00:25:48.788 lat (usec): min=405, max=5699, avg=3408.26, stdev=194.94 00:25:48.788 clat percentiles (usec): 00:25:48.788 | 1.00th=[ 2966], 5.00th=[ 3195], 10.00th=[ 3261], 20.00th=[ 3294], 00:25:48.788 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3392], 00:25:48.788 | 70.00th=[ 3425], 80.00th=[ 3458], 90.00th=[ 3556], 95.00th=[ 3621], 00:25:48.788 | 99.00th=[ 3916], 99.50th=[ 4228], 99.90th=[ 5080], 99.95th=[ 5211], 00:25:48.788 | 99.99th=[ 5538] 00:25:48.788 bw ( KiB/s): min=18139, max=18704, per=24.98%, avg=18470.56, stdev=166.19, samples=9 00:25:48.788 iops : min= 2267, max= 2338, avg=2308.78, stdev=20.87, samples=9 00:25:48.788 lat (usec) : 500=0.01%, 1000=0.06% 00:25:48.788 lat (msec) : 2=0.03%, 4=99.10%, 10=0.80% 00:25:48.788 cpu : usr=94.74%, sys=3.90%, ctx=12, majf=0, minf=9 00:25:48.788 IO depths : 1=7.3%, 2=25.0%, 4=50.0%, 8=17.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.788 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.788 issued rwts: total=11560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.788 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:48.788 filename0: (groupid=0, jobs=1): err= 0: pid=90268: Wed Feb 14 19:26:25 2024 00:25:48.788 read: IOPS=2310, BW=18.0MiB/s (18.9MB/s)(90.3MiB/5002msec) 00:25:48.788 slat (nsec): min=6067, max=74804, avg=12285.59, stdev=7308.08 00:25:48.788 clat (usec): min=849, max=5324, avg=3411.45, stdev=218.11 00:25:48.788 lat (usec): min=855, max=5338, avg=3423.73, stdev=218.32 00:25:48.788 clat percentiles (usec): 00:25:48.788 | 1.00th=[ 2835], 5.00th=[ 3195], 10.00th=[ 3261], 20.00th=[ 3326], 00:25:48.788 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3425], 00:25:48.788 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3589], 95.00th=[ 3752], 00:25:48.788 | 99.00th=[ 4047], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5080], 00:25:48.788 | 99.99th=[ 5342] 00:25:48.788 bw ( KiB/s): min=18267, max=18688, per=24.99%, avg=18475.89, stdev=135.96, samples=9 00:25:48.788 iops : min= 2283, max= 2336, avg=2309.44, stdev=17.07, samples=9 00:25:48.788 lat (usec) : 1000=0.05% 00:25:48.788 lat (msec) : 2=0.13%, 4=98.56%, 10=1.25% 00:25:48.788 cpu : usr=94.92%, sys=3.80%, ctx=6, majf=0, minf=9 00:25:48.788 IO depths : 1=4.5%, 2=9.6%, 4=65.3%, 8=20.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.788 complete : 0=0.0%, 4=89.8%, 8=10.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.788 issued rwts: total=11555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.788 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:48.788 filename1: (groupid=0, jobs=1): err= 0: pid=90269: Wed Feb 14 19:26:25 2024 00:25:48.788 read: IOPS=2314, BW=18.1MiB/s (19.0MB/s)(90.5MiB/5004msec) 00:25:48.788 slat (nsec): min=5751, max=73376, avg=10195.63, stdev=7169.72 00:25:48.788 clat (usec): min=1197, max=4293, avg=3402.81, stdev=160.93 00:25:48.788 lat (usec): min=1204, max=4314, avg=3413.00, stdev=160.99 
00:25:48.788 clat percentiles (usec): 00:25:48.788 | 1.00th=[ 3130], 5.00th=[ 3228], 10.00th=[ 3261], 20.00th=[ 3326], 00:25:48.788 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3392], 60.00th=[ 3425], 00:25:48.788 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3556], 95.00th=[ 3654], 00:25:48.788 | 99.00th=[ 3851], 99.50th=[ 3916], 99.90th=[ 4113], 99.95th=[ 4228], 00:25:48.788 | 99.99th=[ 4293] 00:25:48.788 bw ( KiB/s): min=18395, max=18688, per=25.04%, avg=18513.22, stdev=115.02, samples=9 00:25:48.788 iops : min= 2299, max= 2336, avg=2314.11, stdev=14.43, samples=9 00:25:48.788 lat (msec) : 2=0.25%, 4=99.52%, 10=0.23% 00:25:48.788 cpu : usr=95.04%, sys=3.88%, ctx=7, majf=0, minf=0 00:25:48.788 IO depths : 1=10.5%, 2=24.3%, 4=50.7%, 8=14.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.788 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.788 issued rwts: total=11584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.788 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:48.788 filename1: (groupid=0, jobs=1): err= 0: pid=90270: Wed Feb 14 19:26:25 2024 00:25:48.788 read: IOPS=2309, BW=18.0MiB/s (18.9MB/s)(90.2MiB/5001msec) 00:25:48.788 slat (usec): min=6, max=101, avg=14.45, stdev= 6.95 00:25:48.788 clat (usec): min=1788, max=5605, avg=3393.26, stdev=162.89 00:25:48.788 lat (usec): min=1798, max=5617, avg=3407.71, stdev=163.12 00:25:48.788 clat percentiles (usec): 00:25:48.788 | 1.00th=[ 3097], 5.00th=[ 3228], 10.00th=[ 3261], 20.00th=[ 3294], 00:25:48.788 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3392], 00:25:48.788 | 70.00th=[ 3425], 80.00th=[ 3458], 90.00th=[ 3556], 95.00th=[ 3621], 00:25:48.788 | 99.00th=[ 3916], 99.50th=[ 4113], 99.90th=[ 4817], 99.95th=[ 5473], 00:25:48.788 | 99.99th=[ 5538] 00:25:48.788 bw ( KiB/s): min=18212, max=18688, per=24.99%, avg=18478.67, stdev=148.43, samples=9 00:25:48.788 iops : min= 2276, max= 2336, avg=2309.78, stdev=18.67, samples=9 00:25:48.788 lat (msec) : 2=0.01%, 4=99.19%, 10=0.80% 00:25:48.788 cpu : usr=94.10%, sys=4.60%, ctx=4, majf=0, minf=9 00:25:48.788 IO depths : 1=11.4%, 2=25.0%, 4=50.0%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.788 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.788 issued rwts: total=11552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.788 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:48.788 00:25:48.788 Run status group 0 (all jobs): 00:25:48.788 READ: bw=72.2MiB/s (75.7MB/s), 18.0MiB/s-18.1MiB/s (18.9MB/s-19.0MB/s), io=361MiB (379MB), run=5001-5004msec 00:25:49.048 19:26:26 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:49.048 19:26:26 -- target/dif.sh@43 -- # local sub 00:25:49.048 19:26:26 -- target/dif.sh@45 -- # for sub in "$@" 00:25:49.048 19:26:26 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:49.048 19:26:26 -- target/dif.sh@36 -- # local sub_id=0 00:25:49.048 19:26:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:49.048 19:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.048 19:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.048 19:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.048 19:26:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:49.048 19:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.048 
19:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.048 19:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.048 19:26:26 -- target/dif.sh@45 -- # for sub in "$@" 00:25:49.048 19:26:26 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:49.048 19:26:26 -- target/dif.sh@36 -- # local sub_id=1 00:25:49.048 19:26:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.048 19:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.048 19:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.048 19:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.048 19:26:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:49.048 19:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.048 19:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.048 19:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.048 00:25:49.048 real 0m23.859s 00:25:49.048 user 2m7.948s 00:25:49.048 sys 0m3.595s 00:25:49.048 19:26:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:49.048 19:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.048 ************************************ 00:25:49.048 END TEST fio_dif_rand_params 00:25:49.048 ************************************ 00:25:49.048 19:26:26 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:49.048 19:26:26 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:25:49.048 19:26:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:49.048 19:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.048 ************************************ 00:25:49.048 START TEST fio_dif_digest 00:25:49.048 ************************************ 00:25:49.049 19:26:26 -- common/autotest_common.sh@1102 -- # fio_dif_digest 00:25:49.049 19:26:26 -- target/dif.sh@123 -- # local NULL_DIF 00:25:49.049 19:26:26 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:49.049 19:26:26 -- target/dif.sh@125 -- # local hdgst ddgst 00:25:49.049 19:26:26 -- target/dif.sh@127 -- # NULL_DIF=3 00:25:49.049 19:26:26 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:49.049 19:26:26 -- target/dif.sh@127 -- # numjobs=3 00:25:49.049 19:26:26 -- target/dif.sh@127 -- # iodepth=3 00:25:49.049 19:26:26 -- target/dif.sh@127 -- # runtime=10 00:25:49.049 19:26:26 -- target/dif.sh@128 -- # hdgst=true 00:25:49.049 19:26:26 -- target/dif.sh@128 -- # ddgst=true 00:25:49.049 19:26:26 -- target/dif.sh@130 -- # create_subsystems 0 00:25:49.049 19:26:26 -- target/dif.sh@28 -- # local sub 00:25:49.049 19:26:26 -- target/dif.sh@30 -- # for sub in "$@" 00:25:49.049 19:26:26 -- target/dif.sh@31 -- # create_subsystem 0 00:25:49.049 19:26:26 -- target/dif.sh@18 -- # local sub_id=0 00:25:49.049 19:26:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:49.049 19:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.049 19:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.049 bdev_null0 00:25:49.049 19:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.049 19:26:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:49.049 19:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.049 19:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.049 19:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.049 19:26:26 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:49.049 19:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.049 19:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.049 19:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.049 19:26:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:49.049 19:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.049 19:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.049 [2024-02-14 19:26:26.352577] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.049 19:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.049 19:26:26 -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:49.049 19:26:26 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:49.049 19:26:26 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:49.049 19:26:26 -- nvmf/common.sh@520 -- # config=() 00:25:49.049 19:26:26 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.049 19:26:26 -- nvmf/common.sh@520 -- # local subsystem config 00:25:49.049 19:26:26 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.049 19:26:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.049 19:26:26 -- target/dif.sh@82 -- # gen_fio_conf 00:25:49.049 19:26:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.049 { 00:25:49.049 "params": { 00:25:49.049 "name": "Nvme$subsystem", 00:25:49.049 "trtype": "$TEST_TRANSPORT", 00:25:49.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.049 "adrfam": "ipv4", 00:25:49.049 "trsvcid": "$NVMF_PORT", 00:25:49.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.049 "hdgst": ${hdgst:-false}, 00:25:49.049 "ddgst": ${ddgst:-false} 00:25:49.049 }, 00:25:49.049 "method": "bdev_nvme_attach_controller" 00:25:49.049 } 00:25:49.049 EOF 00:25:49.049 )") 00:25:49.049 19:26:26 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:25:49.049 19:26:26 -- target/dif.sh@54 -- # local file 00:25:49.049 19:26:26 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:49.049 19:26:26 -- target/dif.sh@56 -- # cat 00:25:49.049 19:26:26 -- common/autotest_common.sh@1316 -- # local sanitizers 00:25:49.049 19:26:26 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:49.049 19:26:26 -- common/autotest_common.sh@1318 -- # shift 00:25:49.049 19:26:26 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:25:49.049 19:26:26 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.049 19:26:26 -- nvmf/common.sh@542 -- # cat 00:25:49.049 19:26:26 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:49.049 19:26:26 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:49.049 19:26:26 -- target/dif.sh@72 -- # (( file <= files )) 00:25:49.049 19:26:26 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:49.049 19:26:26 -- common/autotest_common.sh@1322 -- # grep libasan 00:25:49.049 19:26:26 -- nvmf/common.sh@544 -- # jq . 
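Note: for the digest variant the null bdev is created with --dif-type 3 and the generated config (printed just below) sets "hdgst": true and "ddgst": true, so the fio-side initiator negotiates NVMe/TCP header and data digests with cnode0. For comparison only, a hedged sketch of the same connection made from a Linux host with nvme-cli instead of the SPDK plugin; this is not part of the recorded run and the long-option spellings depend on the installed nvme-cli.

  # sketch: kernel initiator connect with header/data digests enabled
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 --hdr-digest --data-digest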
00:25:49.049 19:26:26 -- nvmf/common.sh@545 -- # IFS=, 00:25:49.049 19:26:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:49.049 "params": { 00:25:49.049 "name": "Nvme0", 00:25:49.049 "trtype": "tcp", 00:25:49.049 "traddr": "10.0.0.2", 00:25:49.049 "adrfam": "ipv4", 00:25:49.049 "trsvcid": "4420", 00:25:49.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:49.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:49.049 "hdgst": true, 00:25:49.049 "ddgst": true 00:25:49.049 }, 00:25:49.049 "method": "bdev_nvme_attach_controller" 00:25:49.049 }' 00:25:49.049 19:26:26 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:49.049 19:26:26 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:49.049 19:26:26 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.049 19:26:26 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:49.049 19:26:26 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:25:49.049 19:26:26 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:25:49.049 19:26:26 -- common/autotest_common.sh@1322 -- # asan_lib= 00:25:49.049 19:26:26 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:25:49.049 19:26:26 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:49.049 19:26:26 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.318 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:49.318 ... 00:25:49.318 fio-3.35 00:25:49.318 Starting 3 threads 00:25:49.617 [2024-02-14 19:26:26.958253] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:49.617 [2024-02-14 19:26:26.958349] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:01.836 00:26:01.836 filename0: (groupid=0, jobs=1): err= 0: pid=90376: Wed Feb 14 19:26:37 2024 00:26:01.836 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(277MiB/10003msec) 00:26:01.836 slat (nsec): min=5992, max=70990, avg=14102.52, stdev=6841.79 00:26:01.836 clat (usec): min=7528, max=17893, avg=13521.87, stdev=2115.79 00:26:01.836 lat (usec): min=7546, max=17919, avg=13535.97, stdev=2115.40 00:26:01.836 clat percentiles (usec): 00:26:01.836 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[12911], 00:26:01.836 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:26:01.836 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15270], 95.00th=[15664], 00:26:01.836 | 99.00th=[16581], 99.50th=[16712], 99.90th=[17957], 99.95th=[17957], 00:26:01.836 | 99.99th=[17957] 00:26:01.836 bw ( KiB/s): min=25600, max=31488, per=30.54%, avg=28351.42, stdev=1653.96, samples=19 00:26:01.836 iops : min= 200, max= 246, avg=221.47, stdev=12.94, samples=19 00:26:01.836 lat (msec) : 10=14.76%, 20=85.24% 00:26:01.836 cpu : usr=94.79%, sys=3.82%, ctx=8, majf=0, minf=9 00:26:01.836 IO depths : 1=6.5%, 2=93.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:01.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.836 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:01.836 filename0: (groupid=0, jobs=1): err= 0: pid=90377: Wed Feb 14 19:26:37 2024 00:26:01.836 read: IOPS=251, BW=31.4MiB/s (32.9MB/s)(314MiB/10004msec) 00:26:01.836 slat (nsec): min=6159, max=74358, avg=17824.43, stdev=7969.09 00:26:01.836 clat (usec): min=4095, max=16506, avg=11914.43, stdev=2154.23 00:26:01.836 lat (usec): min=4116, max=16513, avg=11932.25, stdev=2154.61 00:26:01.836 clat percentiles (usec): 00:26:01.836 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 7832], 20.00th=[10814], 00:26:01.836 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12518], 60.00th=[12911], 00:26:01.836 | 70.00th=[13173], 80.00th=[13566], 90.00th=[13960], 95.00th=[14353], 00:26:01.836 | 99.00th=[15008], 99.50th=[15270], 99.90th=[15926], 99.95th=[16188], 00:26:01.836 | 99.99th=[16450] 00:26:01.836 bw ( KiB/s): min=28985, max=35840, per=34.64%, avg=32151.21, stdev=1959.95, samples=19 00:26:01.836 iops : min= 226, max= 280, avg=251.16, stdev=15.35, samples=19 00:26:01.836 lat (msec) : 10=18.26%, 20=81.74% 00:26:01.836 cpu : usr=94.25%, sys=3.91%, ctx=161, majf=0, minf=9 00:26:01.836 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:01.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.836 issued rwts: total=2514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:01.836 filename0: (groupid=0, jobs=1): err= 0: pid=90378: Wed Feb 14 19:26:37 2024 00:26:01.836 read: IOPS=254, BW=31.8MiB/s (33.3MB/s)(319MiB/10043msec) 00:26:01.836 slat (nsec): min=6153, max=83049, avg=14963.69, stdev=6387.78 00:26:01.836 clat (usec): min=7702, max=91615, avg=11767.32, stdev=7606.84 00:26:01.836 lat (usec): min=7713, max=91625, avg=11782.29, stdev=7606.61 00:26:01.836 clat percentiles (usec): 00:26:01.836 | 1.00th=[ 8586], 5.00th=[ 
9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:26:01.836 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:26:01.836 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[12256], 00:26:01.836 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52691], 99.95th=[90702], 00:26:01.836 | 99.99th=[91751] 00:26:01.836 bw ( KiB/s): min=24576, max=37632, per=35.18%, avg=32652.80, stdev=3541.86, samples=20 00:26:01.836 iops : min= 192, max= 294, avg=255.10, stdev=27.67, samples=20 00:26:01.836 lat (msec) : 10=30.43%, 20=66.16%, 50=0.74%, 100=2.66% 00:26:01.836 cpu : usr=94.10%, sys=4.37%, ctx=33, majf=0, minf=9 00:26:01.836 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:01.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.836 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:01.836 00:26:01.836 Run status group 0 (all jobs): 00:26:01.836 READ: bw=90.6MiB/s (95.1MB/s), 27.7MiB/s-31.8MiB/s (29.0MB/s-33.3MB/s), io=910MiB (955MB), run=10003-10043msec 00:26:01.836 19:26:37 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:01.836 19:26:37 -- target/dif.sh@43 -- # local sub 00:26:01.836 19:26:37 -- target/dif.sh@45 -- # for sub in "$@" 00:26:01.836 19:26:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:01.836 19:26:37 -- target/dif.sh@36 -- # local sub_id=0 00:26:01.836 19:26:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:01.836 19:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.836 19:26:37 -- common/autotest_common.sh@10 -- # set +x 00:26:01.836 19:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.836 19:26:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:01.836 19:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.836 19:26:37 -- common/autotest_common.sh@10 -- # set +x 00:26:01.836 19:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.836 00:26:01.836 real 0m11.093s 00:26:01.836 user 0m29.070s 00:26:01.836 sys 0m1.514s 00:26:01.836 19:26:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:01.836 19:26:37 -- common/autotest_common.sh@10 -- # set +x 00:26:01.836 ************************************ 00:26:01.836 END TEST fio_dif_digest 00:26:01.836 ************************************ 00:26:01.836 19:26:37 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:01.836 19:26:37 -- target/dif.sh@147 -- # nvmftestfini 00:26:01.836 19:26:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:01.837 19:26:37 -- nvmf/common.sh@116 -- # sync 00:26:01.837 19:26:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:01.837 19:26:37 -- nvmf/common.sh@119 -- # set +e 00:26:01.837 19:26:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:01.837 19:26:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:01.837 rmmod nvme_tcp 00:26:01.837 rmmod nvme_fabrics 00:26:01.837 rmmod nvme_keyring 00:26:01.837 19:26:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:01.837 19:26:37 -- nvmf/common.sh@123 -- # set -e 00:26:01.837 19:26:37 -- nvmf/common.sh@124 -- # return 0 00:26:01.837 19:26:37 -- nvmf/common.sh@477 -- # '[' -n 89607 ']' 00:26:01.837 19:26:37 -- nvmf/common.sh@478 -- # killprocess 89607 00:26:01.837 19:26:37 -- common/autotest_common.sh@924 -- # '[' -z 89607 ']' 00:26:01.837 19:26:37 -- 
common/autotest_common.sh@928 -- # kill -0 89607 00:26:01.837 19:26:37 -- common/autotest_common.sh@929 -- # uname 00:26:01.837 19:26:37 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:01.837 19:26:37 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 89607 00:26:01.837 19:26:37 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:26:01.837 19:26:37 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:26:01.837 killing process with pid 89607 00:26:01.837 19:26:37 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 89607' 00:26:01.837 19:26:37 -- common/autotest_common.sh@943 -- # kill 89607 00:26:01.837 19:26:37 -- common/autotest_common.sh@948 -- # wait 89607 00:26:01.837 19:26:37 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:01.837 19:26:37 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:01.837 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:01.837 Waiting for block devices as requested 00:26:01.837 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:01.837 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:01.837 19:26:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:01.837 19:26:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:01.837 19:26:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:01.837 19:26:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:01.837 19:26:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.837 19:26:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:01.837 19:26:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.837 19:26:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:01.837 00:26:01.837 real 1m0.410s 00:26:01.837 user 3m53.646s 00:26:01.837 sys 0m13.427s 00:26:01.837 19:26:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:01.837 19:26:38 -- common/autotest_common.sh@10 -- # set +x 00:26:01.837 ************************************ 00:26:01.837 END TEST nvmf_dif 00:26:01.837 ************************************ 00:26:01.837 19:26:38 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:01.837 19:26:38 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:26:01.837 19:26:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:01.837 19:26:38 -- common/autotest_common.sh@10 -- # set +x 00:26:01.837 ************************************ 00:26:01.837 START TEST nvmf_abort_qd_sizes 00:26:01.837 ************************************ 00:26:01.837 19:26:38 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:01.837 * Looking for test storage... 
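Note: nvmftestfini above tears the dif run down in the usual order: unload the kernel NVMe/TCP modules, kill the nvmf_tgt that served the tests (pid 89607 in this run), then let scripts/setup.sh reset rebind the test NVMe devices. Done by hand, the core of it would look roughly like the sketch below (pid taken from the log; the driver rebinding stays with setup.sh).

  # sketch of the teardown performed by nvmftestfini above
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 89607        # nvmf_tgt for the dif tests; the script then waits for it to exit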
00:26:01.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:01.837 19:26:38 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:01.837 19:26:38 -- nvmf/common.sh@7 -- # uname -s 00:26:01.837 19:26:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.837 19:26:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.837 19:26:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.837 19:26:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.837 19:26:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.837 19:26:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.837 19:26:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.837 19:26:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.837 19:26:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.837 19:26:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.837 19:26:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef 00:26:01.837 19:26:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=01aa9a6c-5c08-466f-9802-e7b920b153ef 00:26:01.837 19:26:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.837 19:26:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.837 19:26:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:01.837 19:26:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:01.837 19:26:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.837 19:26:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.837 19:26:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.837 19:26:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.837 19:26:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.837 19:26:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.837 19:26:38 -- paths/export.sh@5 -- # export PATH 00:26:01.837 19:26:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.837 19:26:38 -- nvmf/common.sh@46 -- # : 0 00:26:01.837 19:26:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:01.837 19:26:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:01.837 19:26:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:01.837 19:26:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.837 19:26:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.837 19:26:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:01.837 19:26:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:01.837 19:26:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:01.837 19:26:38 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:01.837 19:26:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:01.837 19:26:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.837 19:26:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:01.837 19:26:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:01.837 19:26:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:01.837 19:26:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.837 19:26:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:01.837 19:26:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.837 19:26:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:01.837 19:26:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:01.837 19:26:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:01.837 19:26:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:01.837 19:26:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:01.837 19:26:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:01.837 19:26:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.837 19:26:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.837 19:26:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:01.837 19:26:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:01.837 19:26:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:01.837 19:26:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:01.837 19:26:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:01.837 19:26:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.837 19:26:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:01.837 19:26:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:01.837 19:26:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:01.837 19:26:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:01.837 19:26:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:01.837 19:26:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:01.837 Cannot find device "nvmf_tgt_br" 00:26:01.837 19:26:38 -- nvmf/common.sh@154 -- # true 00:26:01.837 19:26:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:01.837 Cannot find device "nvmf_tgt_br2" 00:26:01.837 19:26:38 -- nvmf/common.sh@155 -- # true 
00:26:01.837 19:26:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:01.837 19:26:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:01.837 Cannot find device "nvmf_tgt_br" 00:26:01.837 19:26:38 -- nvmf/common.sh@157 -- # true 00:26:01.837 19:26:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:01.837 Cannot find device "nvmf_tgt_br2" 00:26:01.837 19:26:38 -- nvmf/common.sh@158 -- # true 00:26:01.837 19:26:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:01.837 19:26:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:01.837 19:26:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:01.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:01.837 19:26:38 -- nvmf/common.sh@161 -- # true 00:26:01.837 19:26:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:01.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:01.837 19:26:38 -- nvmf/common.sh@162 -- # true 00:26:01.837 19:26:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:01.837 19:26:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:01.837 19:26:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:01.838 19:26:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:01.838 19:26:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:01.838 19:26:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:01.838 19:26:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:01.838 19:26:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:01.838 19:26:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:01.838 19:26:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:01.838 19:26:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:01.838 19:26:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:01.838 19:26:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:01.838 19:26:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:01.838 19:26:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:01.838 19:26:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:01.838 19:26:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:01.838 19:26:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:01.838 19:26:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:01.838 19:26:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:01.838 19:26:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:01.838 19:26:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:01.838 19:26:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:01.838 19:26:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:01.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:01.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:26:01.838 00:26:01.838 --- 10.0.0.2 ping statistics --- 00:26:01.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.838 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:01.838 19:26:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:01.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:01.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:26:01.838 00:26:01.838 --- 10.0.0.3 ping statistics --- 00:26:01.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.838 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:26:01.838 19:26:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:01.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:01.838 00:26:01.838 --- 10.0.0.1 ping statistics --- 00:26:01.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.838 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:01.838 19:26:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.838 19:26:38 -- nvmf/common.sh@421 -- # return 0 00:26:01.838 19:26:38 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:01.838 19:26:38 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:02.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:02.405 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:02.664 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:02.664 19:26:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.664 19:26:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:02.664 19:26:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:02.664 19:26:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.664 19:26:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:02.664 19:26:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:02.664 19:26:39 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:02.664 19:26:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:02.664 19:26:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:02.664 19:26:39 -- common/autotest_common.sh@10 -- # set +x 00:26:02.664 19:26:39 -- nvmf/common.sh@469 -- # nvmfpid=90981 00:26:02.664 19:26:39 -- nvmf/common.sh@470 -- # waitforlisten 90981 00:26:02.664 19:26:39 -- common/autotest_common.sh@817 -- # '[' -z 90981 ']' 00:26:02.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.664 19:26:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.664 19:26:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:02.665 19:26:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:02.665 19:26:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.665 19:26:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:02.665 19:26:39 -- common/autotest_common.sh@10 -- # set +x 00:26:02.665 [2024-02-14 19:26:39.971216] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:26:02.665 [2024-02-14 19:26:39.971311] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.923 [2024-02-14 19:26:40.113067] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:02.923 [2024-02-14 19:26:40.223218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:02.923 [2024-02-14 19:26:40.223389] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.923 [2024-02-14 19:26:40.223407] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.923 [2024-02-14 19:26:40.223419] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:02.923 [2024-02-14 19:26:40.223595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.923 [2024-02-14 19:26:40.224644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.923 [2024-02-14 19:26:40.224741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:02.923 [2024-02-14 19:26:40.224759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.490 19:26:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:03.490 19:26:40 -- common/autotest_common.sh@850 -- # return 0 00:26:03.490 19:26:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:03.490 19:26:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:03.490 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:03.748 19:26:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.748 19:26:40 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:03.748 19:26:40 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:03.748 19:26:40 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:03.748 19:26:40 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:03.748 19:26:40 -- scripts/common.sh@312 -- # local nvmes 00:26:03.748 19:26:40 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:03.748 19:26:40 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:03.748 19:26:40 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:03.748 19:26:40 -- scripts/common.sh@297 -- # local bdf= 00:26:03.748 19:26:40 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:03.748 19:26:40 -- scripts/common.sh@232 -- # local class 00:26:03.748 19:26:40 -- scripts/common.sh@233 -- # local subclass 00:26:03.748 19:26:40 -- scripts/common.sh@234 -- # local progif 00:26:03.748 19:26:40 -- scripts/common.sh@235 -- # printf %02x 1 00:26:03.748 19:26:40 -- scripts/common.sh@235 -- # class=01 00:26:03.748 19:26:40 -- scripts/common.sh@236 -- # printf %02x 8 00:26:03.748 19:26:40 -- scripts/common.sh@236 -- # subclass=08 00:26:03.748 19:26:40 -- scripts/common.sh@237 -- # printf %02x 2 00:26:03.748 19:26:40 -- scripts/common.sh@237 -- # progif=02 00:26:03.748 19:26:40 -- scripts/common.sh@239 -- # hash lspci 00:26:03.748 19:26:40 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:03.748 19:26:40 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:03.748 19:26:40 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:03.748 19:26:40 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:03.748 19:26:40 -- scripts/common.sh@244 -- # tr -d '"' 00:26:03.749 19:26:40 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:03.749 19:26:40 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:03.749 19:26:40 -- scripts/common.sh@15 -- # local i 00:26:03.749 19:26:40 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:03.749 19:26:40 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:03.749 19:26:40 -- scripts/common.sh@24 -- # return 0 00:26:03.749 19:26:40 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:03.749 19:26:40 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:03.749 19:26:40 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:03.749 19:26:40 -- scripts/common.sh@15 -- # local i 00:26:03.749 19:26:40 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:03.749 19:26:40 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:03.749 19:26:40 -- scripts/common.sh@24 -- # return 0 00:26:03.749 19:26:40 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:03.749 19:26:40 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:03.749 19:26:40 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:03.749 19:26:40 -- scripts/common.sh@322 -- # uname -s 00:26:03.749 19:26:40 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:03.749 19:26:40 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:03.749 19:26:40 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:03.749 19:26:40 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:03.749 19:26:40 -- scripts/common.sh@322 -- # uname -s 00:26:03.749 19:26:40 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:03.749 19:26:40 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:03.749 19:26:40 -- scripts/common.sh@327 -- # (( 2 )) 00:26:03.749 19:26:40 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:03.749 19:26:40 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:03.749 19:26:40 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:03.749 19:26:40 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:03.749 19:26:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:26:03.749 19:26:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:03.749 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:03.749 ************************************ 00:26:03.749 START TEST spdk_target_abort 00:26:03.749 ************************************ 00:26:03.749 19:26:40 -- common/autotest_common.sh@1102 -- # spdk_target 00:26:03.749 19:26:40 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:03.749 19:26:40 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:03.749 19:26:40 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:03.749 19:26:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.749 19:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:03.749 spdk_targetn1 00:26:03.749 19:26:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:03.749 19:26:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.749 19:26:41 -- common/autotest_common.sh@10 -- # set +x 00:26:03.749 [2024-02-14 
19:26:41.076426] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.749 19:26:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:03.749 19:26:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.749 19:26:41 -- common/autotest_common.sh@10 -- # set +x 00:26:03.749 19:26:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:03.749 19:26:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.749 19:26:41 -- common/autotest_common.sh@10 -- # set +x 00:26:03.749 19:26:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:03.749 19:26:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.749 19:26:41 -- common/autotest_common.sh@10 -- # set +x 00:26:03.749 [2024-02-14 19:26:41.104613] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.749 19:26:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:03.749 19:26:41 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:07.031 Initializing NVMe Controllers 00:26:07.031 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:07.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:07.031 Initialization complete. Launching workers. 00:26:07.031 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11358, failed: 0 00:26:07.031 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1117, failed to submit 10241 00:26:07.031 success 805, unsuccess 312, failed 0 00:26:07.031 19:26:44 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:07.031 19:26:44 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:10.315 Initializing NVMe Controllers 00:26:10.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:10.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:10.315 Initialization complete. Launching workers. 00:26:10.315 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5942, failed: 0 00:26:10.315 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1229, failed to submit 4713 00:26:10.315 success 272, unsuccess 957, failed 0 00:26:10.315 19:26:47 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:10.315 19:26:47 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:13.600 Initializing NVMe Controllers 00:26:13.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:13.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:13.600 Initialization complete. Launching workers. 
00:26:13.600 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31563, failed: 0 00:26:13.600 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2647, failed to submit 28916 00:26:13.600 success 519, unsuccess 2128, failed 0 00:26:13.600 19:26:50 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:13.600 19:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.600 19:26:50 -- common/autotest_common.sh@10 -- # set +x 00:26:13.600 19:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.600 19:26:50 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:13.600 19:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.600 19:26:50 -- common/autotest_common.sh@10 -- # set +x 00:26:13.859 19:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.859 19:26:51 -- target/abort_qd_sizes.sh@62 -- # killprocess 90981 00:26:13.859 19:26:51 -- common/autotest_common.sh@924 -- # '[' -z 90981 ']' 00:26:13.859 19:26:51 -- common/autotest_common.sh@928 -- # kill -0 90981 00:26:13.859 19:26:51 -- common/autotest_common.sh@929 -- # uname 00:26:13.859 19:26:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:13.859 19:26:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 90981 00:26:13.859 killing process with pid 90981 00:26:13.859 19:26:51 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:26:13.859 19:26:51 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:26:13.859 19:26:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 90981' 00:26:13.859 19:26:51 -- common/autotest_common.sh@943 -- # kill 90981 00:26:13.859 19:26:51 -- common/autotest_common.sh@948 -- # wait 90981 00:26:14.117 00:26:14.117 real 0m10.513s 00:26:14.117 user 0m42.668s 00:26:14.117 sys 0m1.789s 00:26:14.117 ************************************ 00:26:14.117 END TEST spdk_target_abort 00:26:14.117 ************************************ 00:26:14.117 19:26:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:14.117 19:26:51 -- common/autotest_common.sh@10 -- # set +x 00:26:14.376 19:26:51 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:14.376 19:26:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:26:14.376 19:26:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:14.376 19:26:51 -- common/autotest_common.sh@10 -- # set +x 00:26:14.376 ************************************ 00:26:14.376 START TEST kernel_target_abort 00:26:14.376 ************************************ 00:26:14.376 19:26:51 -- common/autotest_common.sh@1102 -- # kernel_target 00:26:14.376 19:26:51 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:14.376 19:26:51 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:14.376 19:26:51 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:14.376 19:26:51 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:14.376 19:26:51 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:14.376 19:26:51 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:14.376 19:26:51 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:14.376 19:26:51 -- nvmf/common.sh@627 -- # local block nvme 00:26:14.376 19:26:51 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:14.376 19:26:51 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:14.376 19:26:51 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:14.376 19:26:51 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:14.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:14.635 Waiting for block devices as requested 00:26:14.635 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:14.894 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:14.894 19:26:52 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:14.894 19:26:52 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:14.894 19:26:52 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:14.894 19:26:52 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:14.894 19:26:52 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:14.894 No valid GPT data, bailing 00:26:14.894 19:26:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:14.894 19:26:52 -- scripts/common.sh@393 -- # pt= 00:26:14.894 19:26:52 -- scripts/common.sh@394 -- # return 1 00:26:14.894 19:26:52 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:14.894 19:26:52 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:14.894 19:26:52 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:14.894 19:26:52 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:14.894 19:26:52 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:14.894 19:26:52 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:14.894 No valid GPT data, bailing 00:26:14.894 19:26:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:14.894 19:26:52 -- scripts/common.sh@393 -- # pt= 00:26:14.894 19:26:52 -- scripts/common.sh@394 -- # return 1 00:26:14.894 19:26:52 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:14.894 19:26:52 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:14.894 19:26:52 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:14.894 19:26:52 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:14.894 19:26:52 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:14.894 19:26:52 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:15.153 No valid GPT data, bailing 00:26:15.153 19:26:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:15.153 19:26:52 -- scripts/common.sh@393 -- # pt= 00:26:15.153 19:26:52 -- scripts/common.sh@394 -- # return 1 00:26:15.153 19:26:52 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:15.153 19:26:52 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:15.153 19:26:52 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:15.153 19:26:52 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:15.153 19:26:52 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:15.153 19:26:52 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:15.153 No valid GPT data, bailing 00:26:15.153 19:26:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:15.153 19:26:52 -- scripts/common.sh@393 -- # pt= 00:26:15.153 19:26:52 -- scripts/common.sh@394 -- # return 1 00:26:15.153 19:26:52 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:15.153 19:26:52 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:15.153 19:26:52 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:15.153 19:26:52 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:15.153 19:26:52 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:15.153 19:26:52 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:15.153 19:26:52 -- nvmf/common.sh@654 -- # echo 1 00:26:15.153 19:26:52 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:15.153 19:26:52 -- nvmf/common.sh@656 -- # echo 1 00:26:15.153 19:26:52 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:15.153 19:26:52 -- nvmf/common.sh@663 -- # echo tcp 00:26:15.153 19:26:52 -- nvmf/common.sh@664 -- # echo 4420 00:26:15.153 19:26:52 -- nvmf/common.sh@665 -- # echo ipv4 00:26:15.153 19:26:52 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:15.153 19:26:52 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:01aa9a6c-5c08-466f-9802-e7b920b153ef --hostid=01aa9a6c-5c08-466f-9802-e7b920b153ef -a 10.0.0.1 -t tcp -s 4420 00:26:15.153 00:26:15.153 Discovery Log Number of Records 2, Generation counter 2 00:26:15.153 =====Discovery Log Entry 0====== 00:26:15.153 trtype: tcp 00:26:15.153 adrfam: ipv4 00:26:15.153 subtype: current discovery subsystem 00:26:15.153 treq: not specified, sq flow control disable supported 00:26:15.153 portid: 1 00:26:15.153 trsvcid: 4420 00:26:15.153 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:15.153 traddr: 10.0.0.1 00:26:15.153 eflags: none 00:26:15.153 sectype: none 00:26:15.153 =====Discovery Log Entry 1====== 00:26:15.153 trtype: tcp 00:26:15.153 adrfam: ipv4 00:26:15.153 subtype: nvme subsystem 00:26:15.153 treq: not specified, sq flow control disable supported 00:26:15.153 portid: 1 00:26:15.153 trsvcid: 4420 00:26:15.153 subnqn: kernel_target 00:26:15.153 traddr: 10.0.0.1 00:26:15.153 eflags: none 00:26:15.153 sectype: none 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:15.153 19:26:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:15.154 19:26:52 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:18.440 Initializing NVMe Controllers 00:26:18.440 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:18.440 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:18.440 Initialization complete. Launching workers. 00:26:18.440 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 29626, failed: 0 00:26:18.440 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29626, failed to submit 0 00:26:18.440 success 0, unsuccess 29626, failed 0 00:26:18.440 19:26:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:18.440 19:26:55 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:21.727 Initializing NVMe Controllers 00:26:21.727 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:21.727 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:21.727 Initialization complete. Launching workers. 00:26:21.727 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 71167, failed: 0 00:26:21.727 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29516, failed to submit 41651 00:26:21.727 success 0, unsuccess 29516, failed 0 00:26:21.727 19:26:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:21.727 19:26:58 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:25.015 Initializing NVMe Controllers 00:26:25.015 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:25.015 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:25.015 Initialization complete. Launching workers. 
00:26:25.015 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 84251, failed: 0 00:26:25.015 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 21010, failed to submit 63241 00:26:25.015 success 0, unsuccess 21010, failed 0 00:26:25.015 19:27:01 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:25.015 19:27:01 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:25.015 19:27:01 -- nvmf/common.sh@677 -- # echo 0 00:26:25.015 19:27:02 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:25.015 19:27:02 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:25.015 19:27:02 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:25.015 19:27:02 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:25.015 19:27:02 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:25.015 19:27:02 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:25.015 00:26:25.015 real 0m10.485s 00:26:25.015 user 0m5.152s 00:26:25.015 sys 0m2.557s 00:26:25.015 19:27:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:25.015 19:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:25.015 ************************************ 00:26:25.015 END TEST kernel_target_abort 00:26:25.015 ************************************ 00:26:25.015 19:27:02 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:25.015 19:27:02 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:25.016 19:27:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:25.016 19:27:02 -- nvmf/common.sh@116 -- # sync 00:26:25.016 19:27:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:25.016 19:27:02 -- nvmf/common.sh@119 -- # set +e 00:26:25.016 19:27:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:25.016 19:27:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:25.016 rmmod nvme_tcp 00:26:25.016 rmmod nvme_fabrics 00:26:25.016 rmmod nvme_keyring 00:26:25.016 19:27:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:25.016 19:27:02 -- nvmf/common.sh@123 -- # set -e 00:26:25.016 19:27:02 -- nvmf/common.sh@124 -- # return 0 00:26:25.016 19:27:02 -- nvmf/common.sh@477 -- # '[' -n 90981 ']' 00:26:25.016 19:27:02 -- nvmf/common.sh@478 -- # killprocess 90981 00:26:25.016 19:27:02 -- common/autotest_common.sh@924 -- # '[' -z 90981 ']' 00:26:25.016 19:27:02 -- common/autotest_common.sh@928 -- # kill -0 90981 00:26:25.016 Process with pid 90981 is not found 00:26:25.016 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (90981) - No such process 00:26:25.016 19:27:02 -- common/autotest_common.sh@951 -- # echo 'Process with pid 90981 is not found' 00:26:25.016 19:27:02 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:25.016 19:27:02 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:25.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:25.583 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:25.583 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:25.583 19:27:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:25.583 19:27:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:25.583 19:27:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:25.583 19:27:02 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:26:25.583 19:27:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.583 19:27:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:25.583 19:27:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.842 19:27:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:25.842 00:26:25.842 real 0m24.536s 00:26:25.842 user 0m49.153s 00:26:25.842 sys 0m5.802s 00:26:25.842 19:27:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:25.842 ************************************ 00:26:25.842 19:27:03 -- common/autotest_common.sh@10 -- # set +x 00:26:25.842 END TEST nvmf_abort_qd_sizes 00:26:25.842 ************************************ 00:26:25.842 19:27:03 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:25.842 19:27:03 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:25.842 19:27:03 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:25.842 19:27:03 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:26:25.842 19:27:03 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:26:25.842 19:27:03 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:26:25.842 19:27:03 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:26:25.842 19:27:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:25.842 19:27:03 -- common/autotest_common.sh@10 -- # set +x 00:26:25.842 19:27:03 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:26:25.842 19:27:03 -- common/autotest_common.sh@1369 -- # local autotest_es=0 00:26:25.842 19:27:03 -- common/autotest_common.sh@1370 -- # xtrace_disable 00:26:25.842 19:27:03 -- common/autotest_common.sh@10 -- # set +x 00:26:27.745 INFO: APP EXITING 00:26:27.745 INFO: killing all VMs 00:26:27.745 INFO: killing vhost app 00:26:27.745 INFO: EXIT DONE 00:26:28.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:28.311 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:28.311 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:29.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:29.292 Cleaning 00:26:29.292 Removing: /var/run/dpdk/spdk0/config 00:26:29.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:29.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:29.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:29.292 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:29.292 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:29.292 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:29.292 Removing: /var/run/dpdk/spdk1/config 00:26:29.292 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:29.292 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:29.292 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:26:29.292 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:29.292 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:29.292 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:29.292 Removing: /var/run/dpdk/spdk2/config 00:26:29.292 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:29.292 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:29.292 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:29.292 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:29.292 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:29.292 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:29.292 Removing: /var/run/dpdk/spdk3/config 00:26:29.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:29.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:29.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:29.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:29.292 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:29.292 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:29.292 Removing: /var/run/dpdk/spdk4/config 00:26:29.292 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:29.292 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:29.293 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:29.293 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:29.293 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:29.293 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:29.293 Removing: /dev/shm/nvmf_trace.0 00:26:29.293 Removing: /dev/shm/spdk_tgt_trace.pid55590 00:26:29.293 Removing: /var/run/dpdk/spdk0 00:26:29.293 Removing: /var/run/dpdk/spdk1 00:26:29.293 Removing: /var/run/dpdk/spdk2 00:26:29.293 Removing: /var/run/dpdk/spdk3 00:26:29.293 Removing: /var/run/dpdk/spdk4 00:26:29.293 Removing: /var/run/dpdk/spdk_pid55446 00:26:29.293 Removing: /var/run/dpdk/spdk_pid55590 00:26:29.293 Removing: /var/run/dpdk/spdk_pid55895 00:26:29.293 Removing: /var/run/dpdk/spdk_pid56171 00:26:29.293 Removing: /var/run/dpdk/spdk_pid56365 00:26:29.293 Removing: /var/run/dpdk/spdk_pid56447 00:26:29.293 Removing: /var/run/dpdk/spdk_pid56538 00:26:29.293 Removing: /var/run/dpdk/spdk_pid56631 00:26:29.293 Removing: /var/run/dpdk/spdk_pid56664 00:26:29.293 Removing: /var/run/dpdk/spdk_pid56702 00:26:29.293 Removing: /var/run/dpdk/spdk_pid56760 00:26:29.293 Removing: /var/run/dpdk/spdk_pid56894 00:26:29.293 Removing: /var/run/dpdk/spdk_pid57529 00:26:29.293 Removing: /var/run/dpdk/spdk_pid57597 00:26:29.293 Removing: /var/run/dpdk/spdk_pid57662 00:26:29.293 Removing: /var/run/dpdk/spdk_pid57689 00:26:29.293 Removing: /var/run/dpdk/spdk_pid57790 00:26:29.293 Removing: /var/run/dpdk/spdk_pid57818 00:26:29.293 Removing: /var/run/dpdk/spdk_pid57927 00:26:29.293 Removing: /var/run/dpdk/spdk_pid57955 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58012 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58042 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58099 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58129 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58331 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58372 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58446 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58526 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58556 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58626 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58651 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58685 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58705 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58745 
00:26:29.293 Removing: /var/run/dpdk/spdk_pid58770 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58810 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58835 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58864 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58889 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58924 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58943 00:26:29.293 Removing: /var/run/dpdk/spdk_pid58983 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59003 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59037 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59059 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59099 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59118 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59153 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59178 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59211 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59232 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59272 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59286 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59326 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59346 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59380 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59405 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59440 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59459 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59494 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59513 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59550 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59577 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59614 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59637 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59680 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59698 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59734 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59759 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59789 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59858 00:26:29.293 Removing: /var/run/dpdk/spdk_pid59974 00:26:29.293 Removing: /var/run/dpdk/spdk_pid60389 00:26:29.293 Removing: /var/run/dpdk/spdk_pid67119 00:26:29.552 Removing: /var/run/dpdk/spdk_pid67469 00:26:29.552 Removing: /var/run/dpdk/spdk_pid68675 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69054 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69314 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69361 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69619 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69627 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69685 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69743 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69802 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69846 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69849 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69875 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69913 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69920 00:26:29.552 Removing: /var/run/dpdk/spdk_pid69981 00:26:29.552 Removing: /var/run/dpdk/spdk_pid70039 00:26:29.552 Removing: /var/run/dpdk/spdk_pid70099 00:26:29.552 Removing: /var/run/dpdk/spdk_pid70137 00:26:29.552 Removing: /var/run/dpdk/spdk_pid70145 00:26:29.552 Removing: /var/run/dpdk/spdk_pid70170 00:26:29.552 Removing: /var/run/dpdk/spdk_pid70459 00:26:29.552 Removing: /var/run/dpdk/spdk_pid70609 00:26:29.552 Removing: /var/run/dpdk/spdk_pid70865 00:26:29.552 Removing: /var/run/dpdk/spdk_pid70917 00:26:29.552 Removing: /var/run/dpdk/spdk_pid71290 00:26:29.552 Removing: /var/run/dpdk/spdk_pid71816 00:26:29.552 Removing: /var/run/dpdk/spdk_pid72254 00:26:29.552 Removing: /var/run/dpdk/spdk_pid73209 00:26:29.552 Removing: 
/var/run/dpdk/spdk_pid74182 00:26:29.552 Removing: /var/run/dpdk/spdk_pid74298 00:26:29.552 Removing: /var/run/dpdk/spdk_pid74367 00:26:29.552 Removing: /var/run/dpdk/spdk_pid75828 00:26:29.552 Removing: /var/run/dpdk/spdk_pid76064 00:26:29.552 Removing: /var/run/dpdk/spdk_pid76513 00:26:29.552 Removing: /var/run/dpdk/spdk_pid76622 00:26:29.552 Removing: /var/run/dpdk/spdk_pid76771 00:26:29.552 Removing: /var/run/dpdk/spdk_pid76817 00:26:29.552 Removing: /var/run/dpdk/spdk_pid76868 00:26:29.552 Removing: /var/run/dpdk/spdk_pid76914 00:26:29.552 Removing: /var/run/dpdk/spdk_pid77077 00:26:29.552 Removing: /var/run/dpdk/spdk_pid77230 00:26:29.552 Removing: /var/run/dpdk/spdk_pid77499 00:26:29.552 Removing: /var/run/dpdk/spdk_pid77616 00:26:29.552 Removing: /var/run/dpdk/spdk_pid78029 00:26:29.552 Removing: /var/run/dpdk/spdk_pid78414 00:26:29.552 Removing: /var/run/dpdk/spdk_pid78416 00:26:29.552 Removing: /var/run/dpdk/spdk_pid80651 00:26:29.552 Removing: /var/run/dpdk/spdk_pid80953 00:26:29.552 Removing: /var/run/dpdk/spdk_pid81436 00:26:29.552 Removing: /var/run/dpdk/spdk_pid81442 00:26:29.552 Removing: /var/run/dpdk/spdk_pid81779 00:26:29.552 Removing: /var/run/dpdk/spdk_pid81793 00:26:29.552 Removing: /var/run/dpdk/spdk_pid81807 00:26:29.552 Removing: /var/run/dpdk/spdk_pid81842 00:26:29.552 Removing: /var/run/dpdk/spdk_pid81848 00:26:29.552 Removing: /var/run/dpdk/spdk_pid81988 00:26:29.552 Removing: /var/run/dpdk/spdk_pid81990 00:26:29.552 Removing: /var/run/dpdk/spdk_pid82099 00:26:29.552 Removing: /var/run/dpdk/spdk_pid82106 00:26:29.552 Removing: /var/run/dpdk/spdk_pid82214 00:26:29.552 Removing: /var/run/dpdk/spdk_pid82217 00:26:29.552 Removing: /var/run/dpdk/spdk_pid82641 00:26:29.552 Removing: /var/run/dpdk/spdk_pid82694 00:26:29.552 Removing: /var/run/dpdk/spdk_pid82774 00:26:29.552 Removing: /var/run/dpdk/spdk_pid82823 00:26:29.552 Removing: /var/run/dpdk/spdk_pid83162 00:26:29.552 Removing: /var/run/dpdk/spdk_pid83408 00:26:29.552 Removing: /var/run/dpdk/spdk_pid83903 00:26:29.552 Removing: /var/run/dpdk/spdk_pid84462 00:26:29.552 Removing: /var/run/dpdk/spdk_pid84918 00:26:29.552 Removing: /var/run/dpdk/spdk_pid85008 00:26:29.552 Removing: /var/run/dpdk/spdk_pid85093 00:26:29.552 Removing: /var/run/dpdk/spdk_pid85183 00:26:29.552 Removing: /var/run/dpdk/spdk_pid85342 00:26:29.552 Removing: /var/run/dpdk/spdk_pid85431 00:26:29.552 Removing: /var/run/dpdk/spdk_pid85516 00:26:29.552 Removing: /var/run/dpdk/spdk_pid85607 00:26:29.552 Removing: /var/run/dpdk/spdk_pid85955 00:26:29.552 Removing: /var/run/dpdk/spdk_pid86649 00:26:29.552 Removing: /var/run/dpdk/spdk_pid87991 00:26:29.552 Removing: /var/run/dpdk/spdk_pid88195 00:26:29.552 Removing: /var/run/dpdk/spdk_pid88482 00:26:29.552 Removing: /var/run/dpdk/spdk_pid88762 00:26:29.552 Removing: /var/run/dpdk/spdk_pid89314 00:26:29.552 Removing: /var/run/dpdk/spdk_pid89323 00:26:29.552 Removing: /var/run/dpdk/spdk_pid89681 00:26:29.552 Removing: /var/run/dpdk/spdk_pid89842 00:26:29.552 Removing: /var/run/dpdk/spdk_pid90005 00:26:29.552 Removing: /var/run/dpdk/spdk_pid90102 00:26:29.811 Removing: /var/run/dpdk/spdk_pid90261 00:26:29.811 Removing: /var/run/dpdk/spdk_pid90371 00:26:29.811 Removing: /var/run/dpdk/spdk_pid91050 00:26:29.811 Removing: /var/run/dpdk/spdk_pid91081 00:26:29.811 Removing: /var/run/dpdk/spdk_pid91122 00:26:29.811 Removing: /var/run/dpdk/spdk_pid91360 00:26:29.811 Removing: /var/run/dpdk/spdk_pid91391 00:26:29.811 Removing: /var/run/dpdk/spdk_pid91431 00:26:29.811 Clean 00:26:29.811 killing process with pid 
49665 00:26:29.811 killing process with pid 49670 00:26:29.811 19:27:07 -- common/autotest_common.sh@1434 -- # return 0 00:26:29.811 19:27:07 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:26:29.811 19:27:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:29.811 19:27:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.811 19:27:07 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:26:29.811 19:27:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:29.811 19:27:07 -- common/autotest_common.sh@10 -- # set +x 00:26:29.811 19:27:07 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:29.811 19:27:07 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:29.811 19:27:07 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:29.811 19:27:07 -- spdk/autotest.sh@394 -- # hash lcov 00:26:29.811 19:27:07 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:29.811 19:27:07 -- spdk/autotest.sh@396 -- # hostname 00:26:29.811 19:27:07 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:30.070 geninfo: WARNING: invalid characters removed from testname! 00:26:51.997 19:27:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:53.900 19:27:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:56.429 19:27:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:58.960 19:27:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:00.861 19:27:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:03.393 19:27:40 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:05.297 19:27:42 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:05.557 19:27:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:05.557 19:27:42 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:05.557 19:27:42 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.557 19:27:42 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.557 19:27:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.557 19:27:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.557 19:27:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.557 19:27:42 -- paths/export.sh@5 -- $ export PATH 00:27:05.557 19:27:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.557 19:27:42 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:05.557 19:27:42 -- common/autobuild_common.sh@435 -- $ date +%s 00:27:05.557 19:27:42 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707938862.XXXXXX 00:27:05.557 19:27:42 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707938862.Y7m6QL 00:27:05.557 19:27:42 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:27:05.557 19:27:42 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:27:05.557 19:27:42 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:27:05.557 19:27:42 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:05.557 19:27:42 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:05.557 
19:27:42 -- common/autobuild_common.sh@451 -- $ get_config_params 00:27:05.557 19:27:42 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:27:05.557 19:27:42 -- common/autotest_common.sh@10 -- $ set +x 00:27:05.557 19:27:42 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:27:05.557 19:27:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:05.557 19:27:42 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:05.557 19:27:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:05.557 19:27:42 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:05.557 19:27:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:05.557 19:27:42 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:05.557 19:27:42 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:05.557 19:27:42 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:05.557 19:27:42 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:05.557 19:27:42 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:05.557 + [[ -n 5135 ]] 00:27:05.557 + sudo kill 5135 00:27:05.827 [Pipeline] } 00:27:05.845 [Pipeline] // timeout 00:27:05.851 [Pipeline] } 00:27:05.869 [Pipeline] // stage 00:27:05.874 [Pipeline] } 00:27:05.892 [Pipeline] // catchError 00:27:05.901 [Pipeline] stage 00:27:05.902 [Pipeline] { (Stop VM) 00:27:05.916 [Pipeline] sh 00:27:06.197 + vagrant halt 00:27:08.731 ==> default: Halting domain... 00:27:15.312 [Pipeline] sh 00:27:15.682 + vagrant destroy -f 00:27:18.215 ==> default: Removing domain... 00:27:18.228 [Pipeline] sh 00:27:18.510 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:18.519 [Pipeline] } 00:27:18.535 [Pipeline] // stage 00:27:18.540 [Pipeline] } 00:27:18.556 [Pipeline] // dir 00:27:18.561 [Pipeline] } 00:27:18.577 [Pipeline] // wrap 00:27:18.581 [Pipeline] } 00:27:18.596 [Pipeline] // catchError 00:27:18.604 [Pipeline] stage 00:27:18.606 [Pipeline] { (Epilogue) 00:27:18.620 [Pipeline] sh 00:27:18.902 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:24.185 [Pipeline] catchError 00:27:24.187 [Pipeline] { 00:27:24.201 [Pipeline] sh 00:27:24.483 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:24.483 Artifacts sizes are good 00:27:24.492 [Pipeline] } 00:27:24.509 [Pipeline] // catchError 00:27:24.520 [Pipeline] archiveArtifacts 00:27:24.528 Archiving artifacts 00:27:24.694 [Pipeline] cleanWs 00:27:24.705 [WS-CLEANUP] Deleting project workspace... 00:27:24.705 [WS-CLEANUP] Deferred wipeout is used... 00:27:24.712 [WS-CLEANUP] done 00:27:24.714 [Pipeline] } 00:27:24.731 [Pipeline] // stage 00:27:24.737 [Pipeline] } 00:27:24.754 [Pipeline] // node 00:27:24.760 [Pipeline] End of Pipeline 00:27:24.814 Finished: SUCCESS